AI in accountancy How to mitigate the risks
Ahead of the publication of a major Shaping the Profession report on the impact of generative artificial intelligence on accountancy, Ewan McCall, who leads ICAS’ research and thought leadership work, talks to Ryan Herman about its findings, the tech’s revolutionary power and the critical errors on which CAs must keep a watchful eye
If 2025 was the year of mass adoption of AI tools across small and medium-sized businesses, then 2026 will be the year when some of those businesses fall foul of placing all their trust in a bot. That is the assertion of ICAS Research and Thought Leadership Manager, Ewan McCall.
McCall and the Shaping the Profession (StP) team at ICAS are putting the finishing touches to a new report titled Generative AI and Professional Judgement in Accounting.
“Shaping the Profession is about what the profession will look like in five, 10 or even 20 years, and what both chartered accountancy and ICAS can do to meet those expectations and be fit for the future,” says McCall.
“Our earlier reports have focused on what society expects from accountants – and the number one thing that came up as both a challenge and an opportunity was AI, and technology in general. Fifty-four per cent of the CAs we interviewed as part of the research for our Society First report identified technology and AI as the greatest opportunity.”
The full report will be published in mid-March, but McCall gives us a flavour of what to expect, and explains why its recommendations can be applied to businesses across multiple sectors.
To help demonstrate the pace of change, in January this year McCall and colleagues conducted an internal workshop at ICAS, examining a set of predictions that finance professionals made in 2016 about the future of the profession.
One of the 2016 predictions that certainly came true was that use of video-conferencing tools such as Teams or Zoom would become commonplace. Less accurate was the belief that bitcoin and blockchain would completely disrupt the profession. But one thing stood out. “No one was really talking about AI,” says McCall.
Make no mistake, AI tools are transforming every professional job and sector. As McCall, who has an MA in history, adds: “The rate of change here is potentially one or two orders of magnitude greater than the Industrial Revolution, computerisation and even digitalisation.
“The ability to exercise professional judgement is central to the role of the accountant and how they deliver value. Is this person someone you can trust and do they have your best interests at heart? And how do you still retain that quality of professional judgement in a world where that can potentially be put at risk by vastly speeding up a process?
“Our previous report, Society First, published in September 2025, revealed that the key attributes which give the public trust in accountants, aside from their professional qualification, is whether they believe they have their best interests at heart, understand their needs and can be completely honest with them. These are all about emotional intelligence.”
McCall points to other professions and sectors, where there have been cases of individuals relying on AI without doing their due diligence (see panel below).
“I think 2025 was the year when AI became the thing that everyone was talking about. And I suspect 2026 will see a bit of pushback caused by lots of critical failures happening. So what we’re offering here is advice and guidance on how to make sure you won’t be one of those failures.”
“Don’t try to blur the lines separating responsibility and accountability between yourself and AI. Above all, there needs to be a human at the centre of the judgement process”
The upcoming report sets out to tackle two questions:
1. How does Gen AI impact the professional judgement process of accounting professionals?
2. What are the ethical implications of using Gen AI when developing and making professional judgements in accounting settings?
“The main finding is that there needs to be a human at the centre of the judgement process. AI can only perform as well as the information it’s given – weak inputs produce weak outputs. Don’t try to blur the lines separating responsibility and accountability between yourself and AI. Above all, there needs to be a human at the centre of the judgement process.”
The report looks at a medium-size accounting practice to see how it has dealt with the early stages of using AI, and the risks and opportunities that come with it.
“The research team looked at how people are using AI, their seniority, and what they are using it for. They then went into greater detail by following four people and seeing how they used it over the course of six months.”
From there, the report identifies four main case studies:
1. Corporate finance research
2. Internal audit benchmarking
3. Audit planning via board-minutes review
4. Companies House information extraction
So, where do the risks lie when using AI? While the team don’t want to give too much away, a key statement of the report will come as no surprise: “Gen AI can enhance speed, breadth and efficiency if accountants remain critical, reflective, and ethically grounded.
“The profession must adapt to ensure that AI strengthens – not weakens – the quality and trustworthiness of accounting work.”
The report also covers some of the pitfalls that can complicate what might seem like ‘harmless’ use of Gen AI, including hidden bias, over-reliance on AI-generated structures, loss of professional scepticism, ethical concerns about transparency and data integrity, and weakening training pathways for junior staff.
The research team have also come up with a set of recommendations that will appear in the final report, which includes:
• Always check with original sources that AI summaries are accurate.
• Be aware that AI can gradually learn your habits and biases – which may narrow your perspective.
• Keep an eye on whether the AI is guiding or filtering the information you’re given.
The report adds: “What becomes clear across all use cases is that the introduction of Gen AI tools to support the professional judgement process requires a robust professional response, including attention to professional ethics, governance, and oversight.”
Generative AI and Professional Judgement in Accounting will be published later this month. Read Society First here
Intelligence failures
The ICAS report reiterates that, used properly, AI is an asset for accountants. Simply left to replace human judgement, however, it is potentially disastrous. Here are some examples of what can go wrong when people in positions of responsibility opt to let the tech do their thinking.
Publishing
In May 2025, the Chicago Sun-Times newspaper published its recommended summer books reading list. The list included several real and famous authors, but often for books that didn’t exist – among them Hamnet author Maggie O’Farrell, for the non-existent Migrations, and Percival Everett for The Rainmakers. The article had been supplied by an agency which syndicates content. The newspaper withdrew the piece, apologised and said it was “committed to making sure this never happens again”. The agency’s contract was terminated.
Law
The Haringey Law Centre’s claim against the London borough of Haringey for failing to provide its client with accommodation relied on ‘phantom case law’ five times, according to a 2025 report in the Guardian. The pupil barrister presenting the case denied consciously using AI, but admitted she may have relied on Google AI summaries without realising. The law centre, which is a charity, and its lawyer were deemed negligent in a case brought for wasted legal costs.
Medicine
A new Reuters report highlights potential problems with an AI-enhanced navigation system which has seen several people injured after, it is claimed, the tech misinformed surgeons about the exact location of their instruments during operations. In one case, a surgeon punctured the base of a patient’s skull, while two other patients suffered strokes after a major artery was injured. While lawsuits have been filed, there is as yet no definitive ruling blaming AI, though the report does find that FDA-approved devices using AI have been recalled at twice the rate of all FDA-approved devices.