ETHICAL
USE OF AI

James Barbour CA, Director of Policy Leadership, on the need for CAs to remember their Code of Ethics when using AI

This year has the potential to be a pivotal one for AI. The recent debate over whether the latest tools from US tech company Anthropic are a game-changer for professional services continues to gather pace.

It’s just one example of how these tools could significantly impact the way we do business, deliver professional services and even how we charge for the value we provide. 

In its 2025 Code of Ethics, ICAS incorporated the technology-related revisions issued by the International Ethics Standards Board for Accountants (IESBA). These were approved in December 2022, issued in 2023 and became effective in December 2024. Since then, the pace of change has accelerated, especially with the growing importance of Gen AI tools such as ChatGPT and Copilot. 

IESBA continues to monitor these rapidly changing technological developments. So far, it hasn’t seen the need to make any further revisions to its code. However, in its proposed strategy for the period 2028–2031, “digital transformation” features prominently, highlighting three areas in particular likely to shape the future of standard setting.

Increasing use of emerging technologies: Businesses and industries are transforming through the adoption and increasing use of emerging technologies such as AI-enabled tools and agents, the ‘internet of things’ and blockchain. These technologies offer huge opportunities to improve quality, effectiveness and efficiency. At the same time, they raise new and different challenges and risks that need to be properly managed.

Digital assets and their institutionalisation: With the growing use of emerging technologies such as blockchain, digital assets – including cryptocurrencies – have become more popular, and their institutionalisation is accelerating.

Cyber-enabled financial crime: The increasing use of emerging technologies has also led to greater cybersecurity threats and a significant rise in illegal behaviour. Digital systems, networks and AI-enabled tools and agents can be exploited to commit, conceal or facilitate financial misconduct.

CAs should stay informed about these trends and continue developing their skills. This will enable them to take advantage of the opportunities AI offers, for themselves and for those they support, while also putting proper safeguards in place to mitigate risk.

Accountability
and integrity

When considering the potential ethical implications of using AI tools, the letters themselves offer a useful reminder. The “A” can stand for accountability and the “I” for integrity.  

ICAS’ ethical leadership campaign, the Power of One, relaunched in November to mark its 10th anniversary, focuses on the need for CAs to demonstrate ethical leadership and take personal responsibility for their actions. In the context of AI, no matter how the tool is used, accountability always rests with a human being, at least for the moment. This may be the actual user of the AI, but responsibility can extend more widely.

For example, the January retirement of the Chief Constable of West Midlands Police followed concerns that included improper reliance on inaccurate AI-generated information when assessing potential crowd trouble at a football match in autumn 2025. This was another example of an AI “hallucination” – where AI produces confident but false, misleading or nonsensical outputs.

Hallucinations can include fabricated facts (for example, non-existent legal cases or research studies), fake news or false historical data. There are various potential reasons for this, including:

Confabulation – when an AI system fills gaps in its knowledge with information that sounds plausible but is factually incorrect.

Inaccurate predictions – where the model incorrectly forecasts events, such as weather or financial data.

False context – where the model provides accurate information that is not relevant to the user’s prompt. (This illustrates the importance of drafting clear and appropriate prompts).
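The risks above underline why AI-generated material should be verified against trusted sources before it is relied upon. As a purely illustrative sketch (the function name and reference data below are hypothetical, not from any real system), a simple cross-check of AI-cited legal cases against a firm's own trusted index might look like this:

```python
# Illustrative sketch only: flag AI-cited cases that cannot be verified
# against a trusted index. All names and data here are hypothetical.

TRUSTED_CASES = {            # hypothetical index of known, verified cases
    "Smith v Jones [2019]",
    "R v Brown [2021]",
}

def flag_unverified(citations):
    """Return the citations that do not appear in the trusted index."""
    return [c for c in citations if c not in TRUSTED_CASES]

# Citations extracted from a hypothetical AI-drafted document
ai_citations = ["Smith v Jones [2019]", "Doe v Roe [2023]"]
print(flag_unverified(ai_citations))  # → ['Doe v Roe [2023]']
```

Anything flagged would then need human follow-up before the document is used – the point being that verification remains a human responsibility, not something delegated to the tool itself.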

Does the ICAS Code of Ethics deal with this risk? 

You won’t be surprised to find that the answer is yes. The code already contains material that covers this potential threat. The supporting material for the integrity principle highlights that:

“R111.2 A professional accountant shall not knowingly be associated with reports, returns, communications or other information where the accountant believes that the information:

(a) Contains a materially false or misleading statement;

(b) Contains statements or information provided recklessly; or

(c) Omits or obscures required information where such omission or obscurity would be misleading.”

Even though this guidance was not written with AI-generated outputs in mind, it’s still just as relevant when assessing these as it is when reviewing information from colleagues.

“Where an accountant knows the information with which they are associated is misleading, they must take appropriate steps to fix the problem”

If you are using the output from an AI tool, you must consider whether that information could be seen as misleading. As noted above, AI tools may be subject to hallucinations or perhaps place too much emphasis on less reliable content.

Sections 220 (professional accountants in business) and 320 (professional accountants in practice) of the code outline the factors accountants need to consider when relying on technology‑generated information, including AI. These include:

• How much you plan to use or rely on the technology’s output.

• Whether the technology has been properly tested and checked for the intended purpose.

• Your past experience with the technology and whether its use for this purpose is generally accepted.

• Whether the inputs going into the technology are appropriate, including the data and the decisions people make when using it.

If a professional accountant knows or suspects the information connected to their work is misleading, they must take appropriate action to fix the problem. The code explains the actions you might need to take, such as correcting the information and, if it has already been shared, letting the intended users know the correct version.

If a CA believes the organisation or firm hasn’t taken appropriate action, they may need to take further steps, while still respecting confidentiality. These could include consulting with ICAS, the internal or external auditor, or legal counsel. You may also need to consider whether there’s a requirement to inform any third parties, including the users of the information or relevant regulators and oversight bodies.

If, after exhausting all reasonable options, no appropriate action has been taken and the information is still believed to be misleading, the accountant must refuse to be associated with it. In some cases, this may mean resigning from their role.

In short, the key takeaway is that CAs should be aware that AI tools can sometimes produce hallucinations and they must carefully check any AI-generated information before using it.

Explore our AI and Technology hub for up-to-date guidance, news and best practice resources
