Gartner Survey Shows AI-Related Risks See Greatest Audit Coverage Increases in 2024

CAEs Expect Audit Coverage of AI-related Risks to Grow as Organizations Race to Adopt the Technology

The rapid growth and adoption of generative AI (GenAI) have resulted in a scramble to provide audit coverage of potential risks arising from use of the technology, according to a survey by Gartner, Inc.

“As organizations increase their use of new AI technology, many internal auditors are looking to expand their coverage in this area,” said Thomas Teravainen, research specialist with the Gartner for Legal, Risk & Compliance Leaders practice. “There is a range of AI-related risks that organizations face, from control failures and unreliable outputs to advanced cyberthreats. Half of the top six risks with the greatest increase in audit coverage are AI-related.”

Gartner surveyed 102 chief audit executives (CAEs) in August 2023 to rate the importance of providing assurance over 35 risks. The top six risks with the greatest potential audit coverage increases are strategic change management; diversity, equity and inclusion; organizational culture; AI-enabled cyberthreats; AI control failures; and unreliable outputs from AI models (see Figure 1).

Figure 1: Greatest Potential Increases in Audit Coverage


Confidence Gaps for AI Risks

“Perhaps the most striking finding from this data is the degree to which internal auditors lack confidence in their ability to provide effective oversight on AI risks,” said Teravainen. “No more than 11% of respondents rating one of the aforementioned three top AI-related risks as very important considered themselves very confident in providing assurance over it.”

Publicly available GenAI applications, and those built in-house, create new and heightened risks for data and information security, privacy, IP protection and copyright infringement, as well as trust and reliability of outputs. Many enterprise GenAI initiatives are in customer-facing business units, and the proliferation of GenAI makes increasing coverage of unreliable outputs from AI models (e.g., biased or inaccurate information and hallucinations from AI models) a priority to protect the organization from reputational damage or potential legal action.

“With such a broad array of potential risks coming from all over the business, it’s easy to understand why auditors aren’t confident about their ability to apply assurance,” said Teravainen. “However, with CEOs and CFOs rating AI as the technology that will most significantly impact their organizations in the next three years, continued gaps in confidence will undermine CAEs’ ability to meet stakeholder expectations.”




