Harry Borovick at Luminance explores security and compliance in AI, and how businesses can navigate the diverging regulatory paths ahead
As we move full steam ahead into a new year, one trend is sure to persist in 2024: the continued acceleration of Artificial Intelligence (AI).
With AI now in use across industries from manufacturing and media to healthcare and law, discussions around data privacy and AI regulation are reaching fever pitch. While the technology continues to develop at breakneck pace, lawmakers and regulators are playing catch-up as they attempt to agree rules and guidelines on its safe use.
However, drafting appropriate legislation is no simple task given AI's wide range of existing and potential applications. Overly precautionary rules could shut the technology down before it reaches its potential; cohesive, flexible regulation, on the other hand, could be key to driving innovation in the sectors that need it most.
Bridging the data privacy gaps
There are several factors complicating the regulatory process. Last year, the UK Secretary of State for Science, Innovation and Technology decided to establish a UK-US data bridge through the UK Extension to the EU-US Data Privacy Framework.
The data bridge is intended to maintain the level of protection UK individuals are afforded under the UK GDPR, but its existence has already left businesses asking how it will affect the implementation and enforcement of privacy frameworks. Layering AI regulation on top will only add to that complexity.
Meanwhile, the AI Act is the EU's first major attempt at legislating the technology, classifying AI systems by level of risk. The classification determines which systems are banned outright (those considered a clear threat to people), which are subject to tight oversight (systems that could negatively affect safety or fundamental rights), and which may continue largely as before.
The AI Act is unlikely to come into full effect until 2026, but its announcement has already made a splash in the industry, as it extends a European consensus position on societal ethics to AI technologies. Once the Act is enforced, businesses anywhere in the world looking to trade in the EU will need to comply with its requirements, and many will find it simplest to apply those standards across all their markets. This entails significant effort, particularly for the Big Tech players, who have so far been the loudest voices in the AI conversation.
Industry self-regulation is also underway beyond the formal legislative process, with the recent announcement of a global Frontier Model Forum. Formed by Anthropic, Google, Microsoft and OpenAI, the forum will act as an industry body to ensure the "safe and responsible development of frontier AI models."
While it's encouraging to see this kind of commitment from the world's biggest AI players, it's important that the Frontier Model Forum does not drown out the voices of the broader AI community, including the businesses that are yet to emerge.
Navigating diverging paths
Further complicating AI regulatory progress is the stark difference between the approaches of the EU and the US. The US takes a far more decentralised, sector-specific approach to regulation, partly because it is home to many of the Big Tech players, including OpenAI, Google and Microsoft. So far, the only US legislation around AI exists at state level, with an emphasis on incentives rather than constraints in an attempt to sustain the economic momentum behind AI.
In the EU, however, regulatory bodies are clearly taking more comprehensive and precautionary measures, as the AI Act demonstrates. This will no doubt complicate the compliance efforts of businesses that operate across multiple markets, and General Counsel will need to decide where to base their global privacy focus.
With the cost and complexity of compliance set to grow this year, the pressure on legal teams will increase as the burden of navigating new and unfamiliar regulatory environments falls on them.
To ensure that any regulatory scheme is fit for the future of AI, policymakers should steer away from a one-size-fits-all approach. AI will not have the same impacts, challenges or consequences in healthcare, finance, law and other sectors, and its effects differ depending on whether it is a specialised or a general-purpose tool.
Regulations should address and reflect these subtleties, which requires policymakers to work closely with a diverse range of AI innovators. A truly collaborative and open regulatory effort must be made if we’re to address the unanswered questions and make AI more beneficial and productive for us all in 2024.
Harry Borovick is General Counsel at Luminance