
Making the case for (and against) AI regulation

Luke Dash at ISMS.online asks whether, and how, governments should regulate AI

The debate as to whether AI is friend or foe continues to ebb and flow.


By this, I don’t mean the movie depictions of robots taking over the world. In real-world terms, the fear has largely centred around the prospect of AI causing significant labour market disruption that could threaten and displace people’s livelihoods.


In more recent times, however, AI concerns beyond potential job losses have begun to move into sharper focus. With generative AI having now entered the mainstream, worries have also started to emerge as to how algorithms are developed, trained and managed.


Notably, several ethical concerns have arisen relating to fairness, privacy and accountability that demand thoughtful consideration:

  • Bias and fairness: First, there’s a risk that AI systems trained on historical data may inherit and amplify existing biases, leading to unfair outcomes. Amazon’s AI recruitment tool, which was scrapped due to gender bias, stands as a prime example, and the fear is that similar impacts may extend into areas such as criminal justice and lending (a simple illustration of how such skew can be surfaced follows this list).
  • Privacy: AI’s dependency on large datasets also poses problems from a privacy perspective. From unauthorised data collection and the inference of sensitive details to the risk of re-identification from anonymised data, AI raises several challenges relating to personal and sensitive data.
  • Copyright: Copyright is also a significant concern, given that AI models are often trained on expansive datasets. When these models are asked to generate new content, they may inadvertently reproduce copyrighted material, creating potential legal liabilities for businesses.
  • Legal responsibilities: Similarly, it can be unclear who should be held responsible when an AI system causes damage or harm, with these systems creating a grey area in terms of legal responsibility.

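To make the bias risk concrete, here is a minimal sketch (in Python, using only the standard library) of one common way such skew can be surfaced: comparing selection rates between groups in a model’s screening decisions against the “four-fifths rule” often used in employment-discrimination analysis. All data, group names and figures below are invented for illustration and do not describe any real system.

```python
# Minimal sketch of a demographic parity check on hypothetical screening
# results. The data and the four-fifths (80%) threshold are illustrative only.
from collections import defaultdict

# (group, model_recommended) pairs - entirely made-up model outputs
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, recommended in results:
    totals[group] += 1
    selected[group] += recommended  # True counts as 1

rates = {group: selected[group] / totals[group] for group in totals}
print("Selection rates:", rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule: flag any group selected at under 80% of the top rate.
top = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * top:
        print(f"Potential adverse impact: {group} at {rate:.0%} vs top {top:.0%}")
```

A check like this is deliberately crude, and real audits use richer fairness metrics, but even this level of monitoring might have flagged the kind of skew that sank the recruitment tool mentioned above.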

Differing approaches: EU vs US

Naturally, such concerns are fuelling significant public debate about the balance between AI’s benefits and its risks, with governments now stepping in to find ways to better manage the potential challenges.

Several regulatory frameworks have already begun to emerge, with the US and EU taking distinctly different approaches.


In the EU, the AI Act currently stands as the most comprehensive and advanced piece of AI legislation. Critically, much of its emphasis is on protecting individual rights and fairness, with an approach that aims to establish key safeguards while making AI applications safer and more trustworthy.

The US, on the other hand, appears to be taking a more flexible, decentralised approach to AI regulation. While the proposed Frontier AI Act aims to establish consistent standards for safety, security and transparency across the US, there is also room for state-level adaptations to be made as needed.

Here, a prime example can be seen in California’s proposed SB 1047 bill, which would require large AI companies to rigorously test their systems prior to public release, make their safety protocols publicly available, and give the state’s Attorney General the authority to sue developers for any significant harm caused by their systems.

There is no easy answer as to which of these approaches is right or wrong. However, it is already clear that the motivations and goals are perhaps slightly different.


One of the key benefits of the EU’s approach is that its Act provides a unified framework, offering companies operating across member states clear guidelines that set high standards for system safety and consumer protection.

Its focus on rights and fairness may help to build trust in AI systems across Europe. The counterargument, however, is that the stringent demands of these regulations may dissuade companies from pursuing AI development in the region if compliance proves too complicated or burdensome.

In the eyes of some, this impact is already beginning to surface: both Apple and Meta have declined to sign the EU’s AI Pact, and in June 2024 Apple announced that it would be delaying the release of three new AI features in Europe, citing “regulatory uncertainties”.

Here, the US perhaps has the upper hand as a market seen to be taking a more flexible approach to AI legislation, leaving room for state-level adaptation. Yet this approach, too, has its critics.

First, because the US prioritises innovation, privacy concerns around AI systems remain more pronounced in its market. Second, a patchwork of disparate and potentially conflicting state and federal standards could add significant complexity for enterprises operating across multiple states.

ISO 42001: a cohesive path forward

It’s clear that the key challenge for governments in regulating AI lies in striking the right balance between prioritising public safety and addressing growing ethical concerns about AI, without impeding technological progress or making compliance unduly difficult for enterprises.

This won’t be an easy task for policymakers, and it is likely that we will continue to see adaptations and iterations of key frameworks evolving on a regular basis. However, I do believe that there is an opportunity for companies and regulators alike to leverage ready-made, recognised international standards.


Enter ISO 42001 – a standard that provides key guidelines for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS).


Critically, this framework is built on the principle that responsible AI doesn’t have to be a barrier to innovation or success. Instead, it argues that by making ethical considerations a priority in AI development, businesses will be able to actively address growing AI concerns, build greater trust with consumers and proactively mitigate risks.


For companies, it offers several key benefits. As a globally recognised standard for managing AI risks that emphasises safety, transparency and accountability, it gives businesses a basis for aligning with varying regulations, be they international or across state lines in the US.

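As a purely illustrative sketch of what that alignment might look like in practice, the snippet below models a hypothetical register mapping internal AI-management controls to the regulations each helps address. The control names and mappings are invented for this example; they are not drawn from the text of ISO 42001 or any statute.

```python
# Hypothetical register mapping internal AI-management controls to the
# regulations they help address. Names and mappings are illustrative only.
controls: dict[str, set[str]] = {
    "ai-risk-assessment":        {"ISO 42001", "EU AI Act", "California SB 1047"},
    "training-data-provenance":  {"ISO 42001", "EU AI Act"},
    "safety-incident-reporting": {"ISO 42001", "California SB 1047"},
}

def coverage(regulation: str) -> list[str]:
    """List the controls that contribute to compliance with one regulation."""
    return [name for name, regs in controls.items() if regulation in regs]

for reg in ("EU AI Act", "California SB 1047"):
    print(f"{reg}: {coverage(reg)}")
```

The appeal of centring such a register on one recognised standard is that each new jurisdiction becomes another mapping to maintain, rather than a separate compliance programme.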

For regulators, ISO 42001 can also be beneficial. By aligning their frameworks with its core principles, they can simplify compliance and reduce the complexity of adhering to different state or country rules, making it easier for enterprises to expand into new territories.

Of course, regulators will likely continue to develop their own frameworks in ways that balance the unique needs of their local societies, enterprises and economies. However, adopting cohesive and recognised standards such as ISO 42001 as central guiding frameworks may be an effective way of helping businesses to navigate this complex compliance landscape in a safe and competitive manner.



Luke Dash is CEO at ISMS.online


