
Innovation and the regulation of AI

Russ Kennedy at Nasuni explains how emerging regulations are likely to impact the artificial intelligence industry

The news that Meta has paused its AI development in Europe, following privacy concerns raised by the Irish data regulator, is only the latest illustration of the potential risks surrounding AI technologies, and of the debate over the extent to which they should be regulated.

Companies know that tracking AI’s frenetic development is hard enough. Almost one third of UK workers have used AI tools, according to research by Slack, while more than half (56%) of US workers have used generative AI (GenAI), according to a Conference Board survey.

But as the Meta case — where the regulator is concerned over the data used to train the company’s large language models (LLMs) — shows, company leaders will increasingly need to develop practical strategies to accommodate new legislation and regulation of AI.

Chief among the new regulatory measures concerning AI are the EU AI Act (expected to be fully applicable by 2026), the October 2023 US government executive order and the spirit of the November 2023 UK Bletchley Declaration. All three are at an early stage of implementation, even as the potentially game-changing tools they seek to govern continue to develop.

Given the security, privacy and even existential fears over AI, senior executives need to think seriously about different geographical regions’ new laws and directives and their potential impact on the ability to innovate. It is important to understand both the broad implications of these published directives and the local factors that could improve the regulation and wider adoption of AI tools.

AI regulation

The three frameworks mentioned seek, to differing degrees, to anticipate and contain the risks associated with AI technologies and capabilities. We can broadly compare them in four areas:

Risk management

All three prioritise risk mitigation, identifying biased outputs, new security vulnerabilities and unintended consequences as challenges that demand corporate oversight. But they take different approaches to regulation.

Safe innovation

All the frameworks claim to balance safety and ethics with support for innovation. However, this balance may vary by region: the US executive order, for instance, assumes the possibility of working with the country’s own tech giants. And while these declarations generally have high-risk applications in their sights, all three accept that lower-risk AI systems also warrant transparency, and broadly avoid setting hard limits on AI development itself.

Coordination

The US government’s executive order is focused on security, while the EU AI Act prioritises citizens’ rights. The Bletchley Declaration’s motivation is harder to discern, although it calls for collaboration between states. The UK declaration and the EU AI Act both advocate the creation of central regulators, unlike the US executive order.

All the proposals also emphasise the need for coordination between governments and the private sector. What is missing is any indication of which companies could help shape this conversation.

Regulatory scope

At this stage the proposed EU AI Act looks the most detailed, implementing binding rules across member states, while the other frameworks prefer broader principles.

We can start to see a general direction of travel and visualise how AI governance might operate at the global level. But what local factors should company leaders be planning for as they contemplate AI implementation?

Local factors

Common standards

Governments and the tech industry should promote certification, so that AI tools and their foundational models meet trustworthy international standards. For example, the Institute of Electrical and Electronics Engineers (IEEE) Standards Association runs a certification programme addressing the ethics of autonomous systems.

Workable standards are essential if enterprises are to adopt any new technology while containing the risks of AI deployments, giving users confidence that new tools are trustworthy, secure and enterprise-ready.

As these frameworks are translated into shared regulations, the AI industry will need clearer guardrails for the development of AI tools, based on stronger co-operation between governments. But these controls shouldn’t disadvantage local AI industries and the companies that follow the rules when others don’t.

A level field

On the other hand, if we constrain AI’s development too closely, we could fail to deliver equality of opportunity. This could lead to AI monopolies, as smaller providers and startups are regulated or priced out of innovation. While company buyers generally favour established vendors, common standards will give smaller, more nimble AI providers wider opportunities to develop new products.

Data in order

Underlying all these considerations is data security. Many organisations still haven’t updated their data protection and recovery posture. With the wider use of data to train LLMs, the startling growth of GenAI and evolving cyber-threats, companies risk security and compliance headaches, with regulatory penalties likely to exceed ransom payouts.

While Europe still leads in data governance and regulation with GDPR, legislation such as the California Consumer Privacy Act (CCPA) is now being mirrored by other states across the US. Without investing in data protection and compliance, companies leveraging business data sets for AI services face the possibility of penalties, ransom demands, litigation and business disruption.

Trust comes first

Companies need to prepare for regulatory frameworks that permit enterprise-ready, data privacy-compliant AI tools, but regulators’ overriding task will always be to provide assurances for citizens.

As the Meta case shows, enterprises’ needs will always come second to AI being trusted by the public, even as these tools become a fixture in our lives.

Russ Kennedy is Chief Evangelist at Nasuni

Main image courtesy of iStockPhoto.com and Dragon Claws
