An opt-out from the AI race or a reasonable future-proofing of some civil liberties?
Technology, at its most benign and at its most pernicious, is a battlefield. Cybersecurity professionals play catch-up with online crime, big tech plays who-blinks-first with nation states – not to mention the international wrangling involved in reshaping global trade into a multipolar playground.
In some of these races the parties are more evenly matched than in others. Cybersecurity experts, for example, see their fight as an endless race without a fighting chance, and regulation against the social hazards and overreach of technology follows a similar pattern.
Precautions taken against the AI siren song
GenAI, the most revolutionary technology we’ve seen since the rise of the internet, seems, though, to follow a somewhat different playbook than the usual wild west – collateral damage – regulation cycle.
In May 2023, six months after the launch of ChatGPT, a one-sentence statement released by the Center for AI Safety maintained that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
Signatories of the statement included Sam Altman, chief executive of OpenAI, and Demis Hassabis, chief executive of Google DeepMind, as well as two of the godfathers of neural networks – the technology underlying GenAI.
For Sam Altman, regulation remains front of mind, whether he is imploring lawmakers at a Senate hearing to regulate AI or envisioning the UAE as a regulatory sandbox for the technology.
Meanwhile, in the EU, widely regarded as a global regulatory powerhouse, policymakers reached an agreement on the final text of the Artificial Intelligence Act on 2 February this year.
Debate over the draft legislation – and annoyance with it on both sides – suggests that the EU is trying to strike a very delicate balance in defining which AI applications are outright banned and which are merely low-risk.
Taking a look at the list of outright bans, one has mixed feelings. After seeing the Nosedive episode of Black Mirror, few in the western world will probably shed any tears over the social-scoring ban. It’s also easy to see why subliminal manipulation, although a greyer area and, in its milder forms, a potential marketing tool, has ended up on the forbidden list.
What is more surprising and contentious is 26c, on “AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”, which bans the marketing and use of such tools in education and the workplace.
These two contexts have been singled out because of the power asymmetry in the relationships that predominate in them: between educators and students, and between employers and workers.
Although these solutions are often marketed as tools for enhancing users’ well-being, they also carry strong surveillance potential and can be surreptitiously or intrusively imposed on students and employees from the top down.
How reliable is emotion AI in its current form?
26c also attributes the ban on emotion AI in these two environments to concerns regarding the scientific basis of AI systems that aim to identify or infer emotions. Cutting-edge emotion AI applications are now multi-modal, able to detect the slightest micro-expression or distinguish a genuine smile from a fake one.
However, experts point out that its commercial applications are based on the Basic Emotion Theory of Paul Ekman, an American psychologist, which maintains that the seven basic emotions (anger, contempt, disgust, fear, happiness, sadness and surprise) are cross-culturally valid – an assumption that may result in racial or even gender biases being coded into emotion AI algorithms.
As Michel Valstar, the CEO of BlueSkeye, a Nottingham University spin-off, points out in an article, disallowing a mostly low-risk use case such as the “training of a doctor to be more empathetic or have better bedside manners” may be an unintended consequence of the original umbrella ban.
There may be other beneficial use cases, too, that fall through the cracks into the category of banned applications. However, chatbots and emotion-recognition systems outside the office and the classroom sit two storeys lower in the EU risk pyramid, as limited-risk applications, where humans merely need to be informed that they are interacting with AI.
More questions than answers
Although members of the competent Council of Ministers’ Permanent Representatives Committee (Coreper) voted to approve the AI Act at a meeting on 2 February, a couple more votes remain: the proposal must still be put to all MEPs in a plenary session, and the Council must then formally vote to adopt it.
Meanwhile, questions abound – not just about how member states will implement the AI Act, but also about its effect on innovation and on the EU’s competitiveness in the field of AI.
We also have to wait and see whether the first comprehensive regulation of AI can become a global template – and, if it doesn’t, how a global digital economy can operate with AI regulations of varying stringency.
Will companies comply with the strictest rules everywhere to avoid unnecessary hassle, or maintain different AI policies for different regions? Are we to expect a new race to the bottom towards lax jurisdictions?
Where does the responsible AI concept of Sam Altman or Demis Hassabis stand in relation to the EU’s AI Act, or to those developers whose credo remains “move fast and break things”?
As for consumers in western democracies, will they warm at scale to virtual companions that can be upgraded into thoughtful romantic or even sexual partners thanks to their emotion-recognition capabilities?
Are rights groups eventually going to come to terms with the integration of attention-tracking AI tools into Zoom and other video-conferencing platforms – tools some of them protested against so vehemently only two years ago?
Or would we like to see our kids, as graduates, attend training courses to learn the well-rehearsed smiles that win in the race against recruitment-screening algorithms, as South Korean students already do?
“We’ll see” is likely to be among the most frequently used expressions of 2024, along with ChatGPT and generative AI.