ao link
Business Reporter
Business Reporter
Business Reporter
Search Business Report
My Account
Remember Login
My Account
Remember Login

AI and corporate governance: managing the risks to business

Linked InTwitterFacebook

AI will be central to many business’ strategies as they enter 2024. The opportunities, especially in enhanced customer experience but also in predictive analytics and process automation, are very attractive.

 

However, AI comes with risks. As well as it sometimes failing to provide the anticipated returns, businesses can often experience compliance failures, damage to efficient operations and risks to employee wellbeing. All can lead to reputational damage as well as a hit on profits.

 

Organisations should not use these risks as an excuse to avoid AI or forbid its use by workers: not investing in AI is also a risk. But they should accept that AI does need strong governance and robust planning. To help with this, organisations can consult the recently published international standard on the management of AI systems, ISO/IEC 42001: 2023.

 

Investment failures

 

Investment failures are simple to understand: the investment in AI hasn’t delivered what it (or perhaps what a slick salesman) promised. This can be for a number of reasons:

  • There were unrealistic expectations about what was possible 
  • Using AI simply wasn’t appropriate – a sledgehammer to crack a nut, perhaps 
  • The AI system is too expensive to deliver a positive RoI, requiring new computer equipment and a lot of energy to run
  • Employees don’t have the confidence or the skills to use the AI system appropriately 
  • The AI system used is inappropriate in some way – perhaps the wrong type of statistical processing (the wrong algorithm) has been proposed, or the data used to train and develop the AI system is inadequate in some way

Where an investment failure happens, leaders should support the team responsible for AI in developing an alternative solution, not necessarily using AI, without inappropriately allocating blame.

 

Compliance failures

 

Compliance failures are another major worry for organisations, driven as often as not by concerns about privacy regulations and the UK’s GDPR. AI can cause data compliance failures but there is also the possibility of compliance issues in other areas, including equality laws, marketing rules, industry regulations and fair trading.

 

Compliance is a problematic area for AI because there are numerous government initiatives around the world that involve the regulation of AI, many of which are still under development. However, detailed requirements outlined in the EU’s AI Act and China’s Interim Measures for the Management of Generative Artificial Intelligence Services provide useful structures about what is likely to be required.

 

A compliance failure can lead to financial penalties but often worse is the reputational damage that also happens, and the cost of making repairs to systems and processes to prevent any future failures.

 

Operational risks

 

Operational risks are a central concern. Using AI could have several operational implications. For example, because AI is highly technical, and not well understood by many people, there is a danger that any organisation using it for a critical process could be tied into a supplier.

 

Another operational concern is that the use of AI may result in legal problems for an organisation, for example disputes over intellectual property or contracts. If data is, say, being used to build an AI system, the organisation should be confident that it has a licence to use the data for that purpose.

 

Cyber-security risks

 

Of particular concern is information security, or cyber-security risks. AI can be treated as an IT system when it comes to cyber-security (with a need to implement measures to ensure the confidentiality, integrity and availability of the data in the system). Getting cyber-hygiene right is fundamental to the protection of any AI system. In addition, there are some dangers to AI systems that go beyond existing cyber-security best practice. These include: 

  • Prompt injection, which involves giving a generative AI system some instructions (or “prompts”) that result in outputs the system was designed to avoid delivering, such as the provision of harmful information
  • Evasion attacks, which involve people deliberately altering AI inputs, for example, an autonomous driving system might be confused by people altering road signs with small amounts of paint
  • Data poisoning, which can happen when malicious actors publish misleading data in a data source (such as the internet) used for training by AI, for example images of dogs that have deliberately been mislabelled as cats

Data risks

 

Another important operational risk comes from the use of data. Data is central to AI and so data governance is key to managing uncertainty and risk. Data risks include poor data quality. Inaccurate, incomplete, out-of-date, or biased data can lead to biased AI outcomes, affecting decision-making and fairness.

 

Organisations should therefore implement robust data governance practices to ensure the quality, integrity and privacy of data used for training and deploying AI models. These include: the use of clear policies for data handling; robust processes to ensure data is accurate, complete and consistent; privacy by design measures including data minimisation and anonymisation; access controls; and regular audits and monitoring.

 

Output risks

 

Once an AI model has been put into operation, there needs to be a process of monitoring it and, if necessary, retraining it so that the quality of outputs can be maintained. The intention should be to measure whether the system is being used responsibly – delivering fair, accurate and safe outputs that preserve people’s agency and privacy.

 

People management risks

 

People management issues are also important. The use of AI could cause anxiety among workers who fear losing their jobs or who don’t believe their technical skills will be sufficient for them to use AI-enhanced systems. There may well be large-scale effects on some jobs, perhaps including mass redundancies. Organisations may well benefit from ensuring such workers have access to continuous training and are therefore able to move to different roles (which would otherwise be hard to fill) should their current role be automated.

 

Another issue may be the loss of skills or knowledge when AI takes over a process such as coding a website. Should the AI fail for some reason, the organisation may find difficulty reverting to the previous way of doing things. One difficulty is the temptation to automate entry-level jobs as this may mean a future shortage of people with the necessary skills at more senior levels in the organisation.

 

Managing AI risks for organisational success

 

Leaders must engage with the wide range of risks associated with AI if they are to guide their organisation safely towards its trusted and responsible use. And, on a positive note, organisations are unlikely to require completely new processes for dealing with AI, although they will probably need to enhance existing processes to take account of new risks generated by AI and fill the inevitable gaps.

 

However, with the right approach, involving rigour, determination, flexibility and open-mindedness, any organisation irrespective of size and sector will be able to benefit from this exciting but challenging technology.

Linked InTwitterFacebook
Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543