Confronting the ethical risks of new technology

The relationship between business and ethics has always been a contentious one. Notable economists have argued that ethics are at best a secondary consideration: the overriding purpose of business is to maximise profitability and shareholder returns, constrained only by adherence to the letter of the law.

But most modern businesses, along with the investor community, accept that this proposition no longer holds. Corporate social responsibility and environmental, social and governance (ESG) issues are increasingly a fixture for businesses of every size, and they inform the actions and standards by which those businesses operate.

At the same time, the breadth of ethical considerations keeps expanding – sustainability, diversity, modern slavery and human rights, consumer protection, climate change and beyond. As businesses and governments hew a path through these challenges, that path must also address the tools of technological advancement, in particular data analytics and artificial intelligence (AI).

Clyde & Co was recently involved in surveying more than 100 general counsel (GCs) and C-suite executives from a wide range of multinational organisations around the world. The findings confirmed that those GCs and their boards believe that data analytics and AI are the technologies likely to have the greatest impact on their organisations over the next few years.

But at the same time, just 31% of the GCs and 53% of the board members surveyed felt comfortable with the risks that AI presents, and even fewer felt comfortable with the risks of data analytics (22% of GCs and 47% of board members).

The risks associated with AI and data analytics extend from the physical to the fiscal: the production and operational environment, first- and third-party liability considerations, and potential (and unintended) market disruption and distortion.

In addition, and perhaps crucially, they extend into the increasingly important area of potential reputational harm. Failing to bring an informed ethical perspective to the way a business engages with new technologies, to how those technologies interface with its workforce and customer base, and to the technology’s potential to impact or disrupt, is a very real risk.

In the modern trading environment, it is fundamental to maintain customer trust and confidence and to conduct business (including the use of AI and data analytics) in a manner that preserves the integrity of the business’s brand and reputation.

In this context, it would be wrong to assume that established codes of ethics (and indeed the associated law and regulation) are fit to meet the challenges posed by the advance of AI and data analytics across businesses.

AI offers capabilities that have previously been the exclusive preserve of humans. And the range of potential ethical factors confronting businesses is vast – from the unintended societal harms of imperfect AI systems and reliance on inadequate datasets, to hard-to-explain outcomes derived from the wholesale use of ‘black box’ AI systems.

Global rulemakers are already starting to put pen to paper. Earlier this month, the High-Level Expert Group on Artificial Intelligence, a group established by the European Commission, published guidance on the ethical use of AI which, among other things, warns that algorithms must not discriminate on grounds of age, race or gender.

The group set out seven key requirements for “trustworthy” AI systems, including the need for human agency and oversight, and added that the burgeoning industry must comply with existing laws on anti-discrimination, privacy, and consumer and environmental protection. For organisations making use of AI, the regulatory roadmap is set to become ever more complex.

These and similar considerations present a material challenge for businesses, within and without regulated sectors. Legislators and regulators are duty-bound to establish frameworks within which technological advancement can thrive safely. Where does the balance lie between regulation that provides adequate societal safeguards and that which stymies innovation and efficiency?

Any business looking to manage its risks (reputational or otherwise) should give very careful consideration to establishing an ethical governance framework. By getting on the front foot in this way, the business community will be better placed to influence the law and regulation that follows.

Contributed by Simon Konsta, senior partner, Clyde & Co
