Risk managers warned generative AI is a threat to all businesses

Companies urged to swiftly put in place AI policies and training

All risk professionals should consider the threats generative AI poses to their business, a panel of experts warned during a webinar hosted by Commercial Risk.

Jonathan Shehan, technology and transformation risk manager at UK health insurer Simplyhealth, and Scott Fenstermaker, vice-president of digital marketing at Riskonnect, agreed that even a business that opts out of using AI services should be alive to the risks arising through its supply chain, third-party services or employees’ use of tools such as ChatGPT.

“If you haven’t specifically blocked ChatGPT, your staff will be using it and those risks will start materialising,” Shehan said. “You ignore these risks at your peril.”

Riskonnect’s inaugural New Generation of Risk Survey of 300 risk managers globally, with the majority of respondents in the US, revealed 7% do not anticipate any threats to their business from generative AI. “It’s really quite surprising,” Shehan said. “Hopefully that 7% is now closer to 0% as we know these things exist and should be anticipated.”

Shehan said risks from generative AI have already materialised and have become live issues for organisations.

Data privacy was the threat most commonly cited by survey respondents (65%), followed by decisions based on inaccurate information (59%), employee misuse (55%), copyright and IP risks (34%) and discrimination risks (17%).

“There is an element that as risk professionals we’re slightly on the back foot already and reactive to generative AI services. But the risks exist – we can’t just go ahead without considering data security,” Shehan warned risk managers.

“We’ve seen chatbots and other AI chat services disclosing sensitive information with the right prompts – if you convince ChatGPT or other services that you are a system administrator, there’s evidence of it giving up its back-end code and starting to reveal some of that sensitive information,” Shehan explained.

Fenstermaker said he thinks discrimination risks are currently underappreciated, given companies could be using AI systems with built-in bias or discrimination. “The intrinsic bias that exists within humans can manifest in the AI system,” Fenstermaker said.

“The first place insurers are likely going to see these risks is in fraud and the lawsuits that will inevitably result from it,” Fenstermaker said. He explained that if it is proved that an AI system has been trained on biased or discriminatory data, a lawsuit could follow.

At the same time, it will become increasingly difficult for organisations to opt out of using AI strategically. Fenstermaker said AI will become something that everybody uses. “There’s so much productivity value that it becomes a cultural meme,” he said.

Fenstermaker and Shehan agreed that risk managers need to formulate and embed policies for using AI within the business, and establish what constitutes reasonable use for employees.

But the Riskonnect survey revealed that 53% of risk managers have neither delivered formal training nor briefed their entire company on risks related to generative AI, and do not plan to do so. Just 17% of risk managers polled said they have fully trained staff, and a further 29% plan to do so.

“AI training is a must – every organisation needs to have some training in their 2024 plan to go out to staff on AI,” Shehan said.

“Get in now and help them understand where the risks could be of using ChatGPT,” Shehan advised, warning of risks for firms whose employees input customer or commercially sensitive data into such tools.

Employee training should be underpinned by an AI policy, both experts agreed. This should incorporate an acceptable use policy for AI platforms on a par with acceptable use policies for the internet, so staff are aware of their obligations.

Fenstermaker stressed the importance of a third-party AI risk policy, with Shehan drawing on his company’s experience as the first globally to introduce Salesforce’s Einstein GPT, whose back end runs on OpenAI. He said it is important to consider what data OpenAI has access to, and advised firms to set clear contractual lines early in negotiations with AI services to protect sensitive data.

Shehan advised firms to conduct full risk assessments and due diligence when onboarding an AI service or implementing a new AI-based system, including documenting data flows and identifying top-level risks and potential controls.

Riskonnect’s survey also revealed a split among risk managers over their confidence in the accuracy and quality of their risk management data: 59% said they are confident, 23% very confident and 18% not confident at all.

Fenstermaker said the rise of the chief risk officer role has placed greater value on risk analysis and the data behind it, which must be kept secure.

“More and more, risk analysis is having an input in the strategic guidance of a company. Data from risk and risk reporting is having an increasingly direct impact on the strategic influence of a company. There is a premium on ensuring there is data integrity,” he said.

Riskonnect will follow up its inaugural New Generation of Risk Survey annually to track changes in risk managers’ perceptions of generative AI risks.
