Thanh Lanh-Connolly and Brett Lambe, from UK law firm Ashfords’ technology team, explore the proposed use of AI to gauge human emotions in business decision-making, and its potential impact
Generative AI (or GenAI) has taken the world by storm over the last 18 months and sparked discussions about how the technology can be applied in various fields. GenAI was one of the World Economic Forum’s key topics this year, and there was much anticipation of AI-powered gadgets at CES.
Although GenAI is being applied to many areas of business, in this article we look at the proposed use of AI to gauge human emotions in business and investment decision-making, and the potential impact of that on a wider scale.
Taking the investment market as one example, last year the FT reported that investors were seeking to form investment opinions by combining traditional financial information with AI-derived speech analysis, which picked up unspoken information and cues from executives’ funding and investment pitches.
Speech analysis software seeks to recognise patterns and signals in recordings of speech, such as word choice, gaps, hesitation and tone, to give insights into the speaker’s emotions. This technology is not a new concept. It belongs to a larger field of study called Emotion AI (or Affective Computing), which focuses on processing, interpreting and simulating human emotions by analysing speech, facial expressions, body language and so on. Thought to have been first introduced in 1995, Emotion AI has been applied in various industries, including advertising, recruitment, call centres, health and life sciences.
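To make the mechanics less abstract, the sketch below shows the kind of prosodic feature extraction such tools typically build on. It is an illustration rather than any vendor’s actual method: it assumes the open-source librosa audio library, a hypothetical local recording named executive_pitch.wav, and a deliberately simplified feature set covering pauses, pitch and energy.

```python
# A minimal sketch of prosodic feature extraction for speech analysis.
# Assumes the open-source librosa library; the feature set is illustrative only.
import numpy as np
import librosa

def prosodic_features(path: str) -> dict:
    """Extract a few simple prosodic cues (pauses, pitch, energy) from speech."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    duration = len(y) / sr

    # Non-silent regions: the gaps between them approximate pauses and hesitation.
    voiced_intervals = librosa.effects.split(y, top_db=30)
    voiced_time = sum((end - start) for start, end in voiced_intervals) / sr
    pause_ratio = 1.0 - (voiced_time / duration) if duration else 0.0

    # Fundamental frequency (pitch) statistics: a rough proxy for vocal tone.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    has_pitch = np.any(~np.isnan(f0))
    pitch_mean = float(np.nanmean(f0)) if has_pitch else 0.0
    pitch_std = float(np.nanstd(f0)) if has_pitch else 0.0

    # Short-term energy: loud, varied delivery versus quiet, flat delivery.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "duration_s": duration,
        "pause_ratio": pause_ratio,
        "pitch_mean_hz": pitch_mean,
        "pitch_std_hz": pitch_std,
        "energy_mean": float(rms.mean()),
    }

if __name__ == "__main__":
    # A downstream system would feed features like these into a trained model;
    # here we simply print them for a hypothetical recording of a pitch.
    print(prosodic_features("executive_pitch.wav"))
```

Commercial systems claim far richer inputs (transcribed word choice, facial expression, even physiological signals), but the contested step is the same: mapping measurable features like these onto a claimed emotional state.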
A helping hand
It is believed that, by processing extremely large volumes of data, AI will be able to recognise behavioural and psychological patterns that are not easily spotted by humans, and to provide that analysis quickly.
AI has been credited with tracking and detecting early signs of mental health issues, such as depression, schizophrenia or suicidal behaviour, with high accuracy. The technology has also been used as part of personalised therapy for patients, encouraging more in-depth sharing, as some patients may feel more comfortable speaking to a chatbot than to a human therapist. Having better insights into patients’ state of mind will help improve treatment. Early intervention can save lives.
Another promising aspect of adopting Emotion AI is that it may help with the decision-making process. Human judgement is susceptible to heuristics, biases and “noise” (according to Daniel Kahneman, who won a Nobel prize for his work in behavioural economics). By presenting data-driven results, AI should help to reduce those flaws. Such a tool could be useful in countering unconscious and institutionalised biases and discrimination.
Businesses have also used Emotion AI to build organisational emotional intelligence, improving engagement with stakeholders, customers and partners. In market-sensitive industries such as customer service, recruitment or marketing, this could provide a vital competitive edge to drive sales and improve quality.
In the eye of the (AI) beholder
The appeal and potential of Emotion AI cannot be denied. Yet, cliché as it may sound, the devil is in the detail.
The belief that AI can help to reduce biases rests on a fundamental assumption: that the AI model itself is not susceptible to the same biases, or open to manipulation. The quality of AI, like any software, depends on how it is designed, trained and used. If discrimination and bias have been embedded in its design and data, we risk perpetuating and amplifying existing discrimination and bias at scale and at speed.
Insufficient or ‘bad’ data may skew the results. Public data on an individual’s speech, expressions or mannerisms is much more limited than the data used for large language models (LLMs, such as ChatGPT), and private data is likely to be specific to one AI system. Without sufficient ‘good’ data, the performance of Emotion AI can be called into question.
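One way to surface that kind of skew, sketched below, is simply to compare a model’s outcomes across demographic groups rather than relying on its aggregate accuracy. This is a toy illustration only, using scikit-learn on synthetic ‘interview’ data in which one group is under-represented and tends to speak more quietly; the groups, features and labels are entirely made up.

```python
# A toy illustration of auditing a model's outcomes per group.
# Synthetic data; the groups, features and threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Group B is under-represented and, in this toy set, tends to speak more
# quietly; loudness is a proxy feature the model then learns to rely on.
n_a, n_b = 900, 100
loudness = np.concatenate([rng.normal(0.8, 0.1, n_a), rng.normal(0.5, 0.1, n_b)])
competence = rng.normal(0.7, 0.15, n_a + n_b)  # the quality we actually care about
group = np.array(["A"] * n_a + ["B"] * n_b)

# Historical hiring labels reward loudness as much as competence (embedded bias).
hired = (0.5 * loudness + 0.5 * competence + rng.normal(0, 0.05, n_a + n_b)) > 0.65

X = np.column_stack([loudness, competence])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, hired, group, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)
preds = model.predict(X_te)

# A simple check: the selection rate per group (the demographic parity gap).
for g in ("A", "B"):
    rate = preds[g_te == g].mean()
    print(f"group {g}: selection rate {rate:.2f}")
```

Real audits are far more involved, looking at multiple fairness metrics and intersecting characteristics, but the principle stands: check the outcomes for each group, not just the headline accuracy.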
Another issue is that, at the moment, AI is trained on human-generated data. With more content being created by AI (not just text but also synthetic voices, avatars and images), an ever larger share of what appears on the internet in future is likely to be AI-generated or AI-assisted. The quality of training on such data is unknown, but it is safe to assume that the old tech maxim of "garbage in, garbage out" will remain as true as ever.
Putting blind faith in the trustworthiness of computer systems has the clear potential to cause unintended issues. We have seen that GenAI is a double-edged sword. While wowing the world with new content on the one hand, it has also made up false facts and got things wildly wrong, including in a series of high-profile hallucinations.
In Mata v. Avianca, two New York lawyers got into trouble for including made-up cases generated by AI in their submission to the court. Last year, an AI-powered search engine concluded that Australia was not real. It is always worth reminding ourselves that while an AI system can suggest a plausible outcome, it does not understand the meaning and context of the words; it is only as good as its programming, training and data set. For Emotion AI, this problem is even more pressing.
The interpretation of emotions depends on variables such as cultural, demographic, gender and social background. For example, it is well established that a lower-pitched voice is generally perceived as more authoritative, confident and trustworthy. This perception will generally work against women and softly spoken men.
An AI system designed or trained on this perception may rate a person from an interview as “weak, indecisive and negative”; but in reality, they may have spoken quietly on account of their gender, because they were speaking their second language, or because they have a speech impairment. If a person were to lose an employment opportunity because of such flawed AI, it could give rise to a range of legal issues, such as a breach of the Equality Act 2010.
Data privacy and security is another obvious concern. Emotion AI is more likely to process biometric data, which enjoys more stringent protection as special category data under data legislation across the UK and Europe (including the GDPR and its UK equivalent). Yet certain individual rights, such as the right of access or deletion, may become impractical in more advanced AI models using neural networks, because locating one snippet of personal data and deleting it from the network is fundamentally impossible – the technology does not work that way.
The use of AI at a larger scale, especially in the public sector, needs to go through extra scrutiny, as the consequences could be disastrous if authorities get things wrong. We have seen lives being wrecked because “the computer said so” in a number of high-profile cases in the UK, the Netherlands and Australia.
While a system can flag patterns and discrepancies, correlation is not causation. A person may sound aggressive, or look uncomfortable, not because they are violent or guilty but because they are distressed; or, worse still, because the system was trained on one specific ethnicity and misinterprets the signals from a person of another background.
The nature of GenAI, including its opacity, makes it challenging to explain how and why the system suggests certain outcomes, where things go wrong, and which party is responsible for those outcomes. This could lead to a failure to achieve justice.
With GenAI in particular, the "black box" nature of the technology reduces our ability to scrutinise its workings and uncover how and why something went wrong. With AI, as the old saying goes, success has many fathers, but failure is an orphan.
Are we there yet?
Many existing use cases of Emotion AI have not been properly tested and vetted to prove that they are safe for a mass roll-out. And for good reason: academic research has shown that current Emotion AI technology raises significant concerns in terms of quality, adaptability and explainability.
When developing or adopting Emotion AI, it is vital to look under the bonnet: to understand the technology, its psychological premises, its training data set and the intended purposes of the system, and to satisfy yourself that it is designed, trained and deployed safely and, ultimately, is fit for purpose.
However, looking under the bonnet is often difficult, given the proprietary nature of the AI and a general unwillingness among the major players in the market to provide this level of access or scrutiny. Nevertheless, as boring as it sounds, organisations must always remember their legal and regulatory obligations when using technology to assist their decision making, particularly when those decisions are about people.
Directors will also need to remember their duties to exercise reasonable care, skill and diligence, and independent judgement. Analysis from AI systems should serve as just one factor in decision making, not the decisive one.
Having an adequate governance framework may help to mitigate the risk of improper use, by ensuring transparency and imposing the necessary guardrails for proper use of the technology and, ultimately, better-quality decision making.
Looking forward
It doesn’t take a fortune teller to know that AI is here to stay. Developments in AI technology in general may help to advance subsets such as Emotion AI, which promise novel solutions to existing problems.
It will be equally important, though, to embrace what it means to be human when using such tools. For decades, many analysts have provided forecasts of our use of technology, and set out a vision of the future, sometimes exciting and sometimes terrifying.
This remains the case with the collective use of AI. Whether the future is exciting, terrifying, or both, all depends on our decisions today.
Thanh Lanh-Connolly is a Chartered Legal Executive and Brett Lambe is a Senior Associate at Ashfords