Airmic calls for debate on managing AI ethical risks

Survey reveals only 6% of UK organisations treat AI ethical risks separately from other ethical risks

UK risk management association Airmic has called for more discussion about how best to manage specific ethical risks created by artificial intelligence (AI) and steer organisations through a series of contentious issues.

The comments came after a new poll of Airmic members revealed that a majority of UK risk managers (59%) say their firm does not currently separate the ethical risks of AI from other ethical risks.

Only 6% of UK risk managers say their firm treats ethical AI risks separately and gives the area extra attention within its risk management framework. A further 24% say it depends on the issue, while 12% don’t know whether the ethical risks of AI are treated separately.

Airmic said a growing number of bodies are calling on organisations to establish AI ethics committees and separate AI risk assessment frameworks.

Its survey further finds that risk managers are almost evenly split over whether to separate the ethical risks of AI, with 51% against such a move and 49% in favour.

Julia Graham, CEO of Airmic, said the specific ethical risks of AI are not yet well understood. “Additional attention could be spent understanding them,” she said.

Hoe-Yeong Loke, head of research at Airmic, further explained: “There is a sense among our members that ‘either you are ethical or you are not’ – that it may not always be practical or desirable to separate AI ethical risks from all other ethical risks faced by the organisation.”

“What this calls for is more debate on how AI ethical risks are managed,” he said, adding that organisations should carefully consider the implications of “potentially overlapping risk management and governance structures”.
