AI Adoption in the Workplace Outpaces Risk Management Strategies

AuditBoard recently commissioned The Harris Poll to survey 1,150 employed Americans about their use of AI-powered tools in the workplace and the presence of basic risk management controls. The results highlight notable trends and potential concerns in workplace AI adoption. Compared with the same survey run in 2023, the results remain substantially unchanged, despite increasing awareness of AI and the proliferation of risks that accompany its potential advantages.

Regarding adoption, the survey found that more than half of employed Americans (55%) use AI-powered tools (e.g., ChatGPT, DALL·E 2, Grammarly) for work, fairly consistent with the 2023 result (51%).

However, fewer than half of employed Americans (42%) say their company has a formal policy on using non-company-supplied AI-powered tools for work, although this marks a significant increase from 2023 (37%). From a risk management perspective, this gap leaves a largely unmitigated risk: employees may use AI however they choose, including with sensitive company information.

Another survey result reinforces this concern: nearly half of employed Americans (49%) have entered company data into an AI-powered tool their company did not supply to help them do their work. This finding is essentially unchanged from 2023 (48%) and highlights the risks to data security, privacy, and the reliability of AI output that arise when employees choose their own AI tools without proper vetting by IT security professionals.

A further statistic underscores an even more significant concern: nearly two-thirds of employed Americans (65%) say that using AI-powered tools for work is safe and secure.

The survey results draw attention to a major risk tied to AI: a human cognitive bias known as the Dunning-Kruger effect. The Dunning-Kruger effect describes our tendency to be most overconfident precisely where our knowledge is weakest. In an AI context, this bias can lead employees who lack an understanding of the technology to overestimate the capabilities of an AI tool.

For example, an employee might use an unapproved AI tool to analyze confidential company data in a way that produces an unintended result. The tool may present calculations or conclusions confidently while giving no indication of its own limitations, a machine analogue of the Dunning-Kruger effect. The employee could then take the inaccurate results at face value, placing too much trust in the AI’s capabilities.

Conclusion

Overconfidence and a lack of understanding of AI’s limitations can pose severe risks to any organization. The survey findings underscore the need for robust connected risk strategies. The most basic controls include clear guidelines for AI tool usage and data handling, along with employee education on AI’s limitations, as illustrated in the sketch below. The use of AI tools in a business setting will continue to expand with or without formal intervention, so a comprehensive approach to risk management and policy development is crucial as we continue to integrate AI tools into the workplace.
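As one concrete illustration of such a basic data-handling control, the minimal sketch below screens text for obvious sensitive patterns before it is sent to an external AI-powered tool. This is a hypothetical example: the pattern list, function names, and blocking policy are assumptions for illustration, not a description of any specific product or of the survey’s methodology.

```python
import re

# Illustrative patterns for common categories of sensitive data. A real
# deployment would rely on a vetted data-loss-prevention (DLP) tool and a
# pattern list maintained by the security team.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality label": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns detected in `text`.

    An empty list means nothing obvious was found; a non-empty list means
    the text should be blocked or redacted before it is sent to an
    unapproved AI-powered tool.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize: contact jane.doe@example.com, SSN 123-45-6789. CONFIDENTIAL."
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked: prompt appears to contain " + ", ".join(findings))
    else:
        print("No obvious sensitive data detected.")
```

Pattern matching of this kind is only a first line of defense; it cannot recognize context-dependent secrets, which is why the formal policies and employee education discussed above remain the primary controls.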

About the Survey

This survey was conducted online within the United States by The Harris Poll on behalf of AuditBoard from April 9–11, 2024, among 1,150 employed adults ages 18 and older. The sampling precision of Harris online polls is measured by using a Bayesian credible interval. For this study, the sample data is accurate to within +/- 3.5 percentage points using a 95% confidence level. For complete survey methodology, including weighting variables and subgroup sample sizes, please contact press@auditboard.com.
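For rough intuition on that precision figure, the classical margin of error for a simple random sample of n = 1,150 at 95% confidence, using the worst-case proportion p = 0.5, works out to 1.96 × sqrt(0.5 × 0.5 / 1,150) ≈ 0.029, or about ±2.9 percentage points. The slightly wider ±3.5-point figure reported above presumably also accounts for weighting and design effects, though that is an inference rather than a stated detail of the methodology.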