
Why proper governance of AI will matter in workplaces in 2026

BearingPoint’s Barry Haycock and Rosie Bowser discuss the evolution of AI in the workplace and the importance of governance in 2026.

AI in the workplace is becoming increasingly common.

Last September, Ibec, a representative group for Irish businesses, released a report showing a surge in the use of AI among the Irish workforce. For example, in July 2025, 40pc of workers reported using AI at work, compared to 19pc in August 2024.

Barry Haycock, senior manager of data analytics and AI at BearingPoint, believes AI in the workplace has moved from “experimental to operational”.

“Copilots and agents are becoming the norm, but we’re also seeing automation of complex information tasks like contract review, compliance checks, large-scale document processing and advanced search across enterprise data,” he told SiliconRepublic.com.

“For large-scale work, we’re seeing ‘AI factories’ being used as businesses seek to automate AI pipelines. Advanced analytics allows business teams to uncover insights without deep expertise.”

However, Haycock says that “sustainable value” in terms of technology still depends on governance, data maturity and the skill of the workforce.

“Without governance and measurable results, pilots die,” he explains. “AI must be increasingly integrated and tailored to business needs.

“Data governance and model interpretability are recognized as key enablers. Security, controlled access and interpretability must be addressed early.”

Rosie Bowser, a data analytics and AI consultant at BearingPoint, says they’ve seen a “temptation” for organizations to rush to implement new AI solutions – and that the “greatest value creation” happens when the solution is focused on a clearly defined problem or workflow.

“Starting with the tool is like painting over structural cracks: it may look like progress, but it doesn’t solve the root problem. So, as an organization, you have to be as ready as the technology is, and that may involve acknowledging and correcting organizational immaturity before rolling out a new AI solution.”

Assistant, not autonomous

Concerns about job replacement by AI have been rampant since the topic of workplace AI emerged. The concern is understandable, especially in light of recent AI-related layoffs.

Haycock believes that AI is more likely to “reinvent” work, rather than eliminate it entirely.

“The real danger is failure to retrain and adapt,” he says. “Anything that can be automated will be, especially repetitive cognitive tasks. Organizations that invest in their workforce and redeploy people into higher-value work will benefit greatly.”

Bowser agrees, asserting that the real danger is “standing still” rather than being replaced. “Organizations that do not fully support skills development may find their employees unable to work safely and confidently in AI-enabled processes,” she said.

Bowser adds that companies should view AI as a workflow accelerator, “rather than an autonomous decision maker”.

“An AI system should be able to take on the repetitive, rule-based parts of the work, but we still need humans to be in charge and make the final decisions,” she explains. “The importance of ownership here is not an afterthought; with the AI Act’s emphasis on traceability and human oversight, this will be critical going forward.”

Governance rules

Haycock says that by 2026, AI governance will be less about pilots and “more about evidence”.

“As the EU AI Act comes into effect and Ireland’s National Digital and AI Strategy 2030 sets clear expectations around responsible procurement, organizations will need to demonstrate documentation, transparency and auditability,” he said.

“I believe that customer expectations will rise, and companies will need to meet that demand. In addition, oversight must be risk-based and focused on outcomes. The differentiator will be reliable governance that enables innovation while standing up to regulatory and public scrutiny.”

Bowser says governance needs to “feel real and tangible”, with measures such as clear rules around data management, audit trails, rollback measures and knowing what the model actually does. The important thing, she says, is to make governance efficient enough that people can follow it “without friction”.

“If you were starting your AI journey in 2026,” said Bowser, “the lesson for me is that much of the documentation often already exists in many organizations, but do the people on the ground know where that documentation is? Do they know who owns the data? Do they know what they can do securely?

“Organizations need to be aware of how people have adopted AI in their daily lives and how they expect to be able to bring it into their work, otherwise you end up with AI practices that carry significant risk. Now that the EU AI Act is in effect, those risks could be even greater.”
