
AI: balancing innovation and regulation

Keith Fenner at Diligent asks: How can UK businesses prepare for the EU AI Act? 


Europe’s AI revolution is here, and with it comes a web of regulation designed to ensure responsible and ethical development. For business leaders, the question isn’t whether to comply, but how to do so while still extracting maximum value from this transformative technology.


The EU AI Act (AIA) will apply a risk-based approach, meaning AI systems will be regulated according to their potential to cause harm to people. Systems classified as limited-risk will face transparency requirements, while those deemed high-risk will carry additional governance obligations to ensure regulatory compliance.
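
To make this tiered structure concrete, the sketch below models the Act’s published risk categories and the headline obligation attached to each. The tier names follow the Act, but the Python representation and the one-line obligation summaries are simplifications for illustration, not legal definitions.

```python
# Illustrative sketch of the EU AI Act's risk tiers and headline obligations.
# Tier names follow the Act's published categories; the obligation summaries
# are simplified for illustration and are not legal definitions.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # conformity assessment and governance required
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no additional obligations under the Act

HEADLINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Conformity assessment plus ongoing governance controls.",
    RiskTier.LIMITED: "Transparency requirements (e.g. disclosing AI interaction).",
    RiskTier.MINIMAL: "No additional obligations beyond existing law.",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {HEADLINE_OBLIGATIONS[tier]}")
```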


There are also requirements for general-purpose AI (GPAI) systems and prohibited AI systems, such as those offering social scoring based on personal characteristics or behaviour. 


Applicability for UK businesses

The AIA will apply to providers, importers, distributors, and businesses deploying AI systems. UK organisations fall within the scope of the AIA because of the Act’s extraterritorial applicability. Much like the EU’s GDPR, if an AI system affects people residing in the EU, the AIA will apply to the business regardless of its location.


While it aims to ensure the safe and ethical development and deployment of AI, the Act also imposes significant responsibilities on organisations. The potential for hefty fines – up to €35 million or 7% of global turnover, whichever is higher – underscores the importance of becoming and remaining compliant. For a business with €1 billion in global turnover, the 7% figure alone would be €70 million.


But compliance is just the tip of the iceberg. To truly thrive in this new era, UK business leaders need to reimagine their approach to AI. That means striking the right balance between innovation and regulation, harnessing AI’s potential without jeopardising safety or ethics.


Demystifying the European AI landscape 

There’s no denying that AI brings tremendous benefits to businesses across numerous industries, driving breakthroughs in manufacturing, healthcare, education, and other areas.


For instance, easy access to generative AI tools like chatbots and image generators can help to simplify and automate mundane, time-intensive day-to-day tasks. In turn, this allows people to focus more on decision-making and critical work, boosting productivity and efficiency.


However, the benefits created by AI can also be a double-edged sword. The technology can be put to nefarious uses: leveraged by cyber-criminals, used to create deepfake images and videos, or even weaponised to aid geopolitical endeavours. If the waters weren’t murky enough, the World Economic Forum’s Global Risks Report 2024 highlighted the adverse outcomes of AI as a key risk this year.


This is where the EU AI Act comes in. It establishes rules not only for the technology itself, but also for its use in scenarios where AI could have dire consequences for society. Earlier this month, the AIA was finalised and endorsed by all 27 EU member states, and it is now on track to be approved in April. Following a debate on AI practices, it may enter into force later this year.


Ready, set, comply 

Now is the time for UK businesses and boards to begin preparing for the AIA’s implementation. But what does this involve? 


Businesses should start by building and implementing an AI governance strategy. The next step is an assessment that maps, classifies, and categorises the AI systems they use or have under development against the risk levels set out in the AIA. Where high-risk systems are identified, a conformity assessment will be needed to confirm that the AIA’s requirements have been addressed before the AI system is placed on the EU market.
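
As an illustration of this classify-then-assess workflow, here is a minimal sketch of how an AI-system inventory and the high-risk market gate might be represented. The record fields and the gate function are hypothetical, chosen only to illustrate the logic the Act implies, not to reproduce its wording.

```python
# Hypothetical AI-system inventory record and market-entry gate, sketched to
# illustrate the classify-then-assess workflow described above. Field names
# and the gate logic are illustrative assumptions, not the Act's wording.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str             # "unacceptable" | "high" | "limited" | "minimal"
    conformity_assessed: bool  # relevant only for high-risk systems

def may_enter_eu_market(record: AISystemRecord) -> bool:
    """Prohibited tiers never pass; high-risk systems pass only after a
    conformity assessment has been completed; other tiers face no gate."""
    if record.risk_tier == "unacceptable":
        return False
    if record.risk_tier == "high":
        return record.conformity_assessed
    return True  # limited/minimal: transparency duties may apply, but no gate

inventory = [
    AISystemRecord("cv-screening", "recruitment triage", "high", False),
    AISystemRecord("support-chatbot", "customer service", "limited", False),
]
for rec in inventory:
    print(rec.name, "->", "OK" if may_enter_eu_market(rec) else "blocked")
```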


As part of this process, the board should be mindful to implement appropriate safeguards and to inform stakeholders and investors of the AI systems being used.


Once businesses have categorised their AI systems, GRC professionals will need to perform gap assessments between current policies and the new requirements, and determine whether the privacy, security, and risk regulations they already track can be applied to AI as well.
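
At its simplest, a gap assessment of this kind amounts to comparing required controls against implemented ones. In the toy sketch below, every control identifier is an invented placeholder; a real assessment would work from the Act’s final text and the organisation’s own policy register.

```python
# Toy gap assessment: compare controls required by the AIA (placeholder names)
# against controls already covered by existing privacy/security policies.
# All control identifiers here are invented for illustration.
required_controls = {
    "data-governance", "human-oversight", "technical-documentation",
    "transparency-notices", "post-market-monitoring",
}
implemented_controls = {
    "data-governance",       # covered by an existing GDPR programme
    "transparency-notices",  # covered by existing privacy notices
}

gaps = sorted(required_controls - implemented_controls)
print("Controls still to address:", gaps)
```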


They will also need to establish a strong governance framework, whether developing AI systems in-house or adopting third-party AI solutions, with top-down buy-in from the board, senior leadership, and other stakeholders, including data protection officers.


To ensure the board and leadership consistently make responsible decisions on AI for their organisation, they should consider external education or certification on governing AI. A training or certification programme focused on helping senior decision-makers navigate the ethical and technological issues inherent to AI will ensure the organisation adopts trustworthy practices.


As the EU AI Act and other AI policies continue to evolve, GRC professionals will need to stay informed about standards and guidelines to ensure their business remains compliant.


Supercharging the board and GRC with AI 

While the EU AI Act introduces regulations, it also opens doors for boards and GRC professionals to harness the power and opportunity of AI while staying compliant and responsible. 


AI tools can be used to automate tasks such as risk assessments and documentation, and to analyse vast swathes of data to identify potential biases or vulnerabilities in systems.


In fact, Diligent’s research found that the appetite for AI is certainly high: 76% of EMEA senior business leaders are currently using, or planning to use, data analytics, visualisation, or AI to support leadership decision-making processes. And 61% believe investment in technology, data or AI has improved the processes by which business leaders in the organisation arrive at decisions.


However, it’s important that businesses are practical about how AI can be leveraged. Many organisations hope AI will manage risks by solving complex data problems, and we found that 48% claim AI will automate decision-making. But unless organisations solve their data management issues, no amount of AI is going to help businesses make better GRC decisions.


Ultimately, it’s important to remember that AI is not just a compliance requirement; it’s a strategic opportunity. By embracing responsible AI practices through a risk management-focused lens and leveraging its power strategically, boards and GRC professionals can thrive in the new regulatory environment and unlock significant competitive advantages.




Keith Fenner is SVP and GM EMEA at Diligent 


Main image courtesy of iStockPhoto.com
