Business Reporter

Protecting consumers from deepfake scams


Kelvin Chaffer of Lifecycle Software explains why telcos must combat the growing rise of sophisticated impersonation scams

 

Artificial intelligence (AI) is unlocking practical applications that are transforming industries across the global economy.

 

From self-healing infrastructure and fully contactless customer service to large-scale hyper-personalisation and the automated generation of marketing copy and graphics with tools such as ChatGPT, these capabilities are today’s reality.

 

Notably, these AI solutions can significantly enhance, and in some cases drastically outperform, conventional business roles.

 

However, despite its positive use cases, AI also fuels cyber-criminal activities. In fact, Ofcom’s 2023 report serves as a grim reminder of the growing threat of customer-directed fraud, particularly with the rise in AI. At a staggering 51%, impersonation fraud is the most common type of customer-facing fraud.

 

Worse, over a fifth of victims don’t realise they’ve been scammed until at least 24 hours later, and more than $8.8 billion was lost to scams in 2022.

 

As customer-directed fraud becomes more sophisticated and damaging, subscribers may question telcos’ ability to keep them safe. Despite being a highly regulated industry with strict telecom requirements for consumer protection, communication service providers struggle to keep up with AI-led fraud.

 

With AI making it a race against time between threat actors and telcos, it’s more important than ever for operators to put their customers’ safety first.

 

The ins and outs of AI impersonation and voice scams

AI’s greatest risk lies in its ability to blur the lines between reality and illusion, offering cyber criminals an affordable and efficient technology for spreading misinformation. Previously, one of the most sophisticated tactics was a four-word phone scam where a threat actor would call you and ask, “Can you hear me?” in hopes of recording your simple “yes” answer. This recording can then be used to authorise big purchases or access your online accounts.

 

Nowadays, with the use of AI, phone scams are getting much more intricate and realistic, going beyond a simple one-word recording of your voice. Success no longer depends on the scammer’s acting ability or a degree of naivety on the victim’s part: AI now handles the heavy lifting. Widely used AI audio platforms like Murf, Resemble and ElevenLabs enable users to generate authentic-sounding voices through text-to-speech technology.

 

Scammers take advantage of easy access to these services—most offer free trials and don’t need advanced technical knowledge. A scammer can feed an audio file of a person’s voice into these platforms, and the AI constructs a vocal model. Scammers can achieve a 95% match of the original voice with just a short audio clip. Type out any desired script, and the AI voice will instantly recite it.

 

For example, an elderly couple in the US nearly lost all of their savings when they received a distraught phone call, apparently from their grandson, asking for money; the call appears to have been a scam.

 

In addition, the famous “This is your CEO, and I have an important task for you” emails, usually sent to new starters at a company, have now evolved into deepfake calls in which an attacker clones a manager’s voice and holds a normal conversation with employees. With few people primed to challenge a familiar voice, many will inevitably fall victim to these scams.

 

While organisations and the general public are encouraged to exercise caution, for example by agreeing a challenge phrase to verify that a caller really is a loved one or a colleague, more customers are turning to telcos to ask what they can do to protect them from the AI threat.

 

Meeting customers’ expectations of safety

To beat the scammers at their own game, telcos must use AI, too. Recently, researchers in Australia developed a chatbot that mimics humans to confuse scammers and keep them on the line long enough to make their calls economically unviable.

 

Another detection method relies on spectrograms, visual depictions of an audio signal’s frequency content over time. While a voice impersonation might be impossible to detect by merely listening to the call, voices can be differentiated when their spectrograms are compared side by side.
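The spectrogram comparison described above can be sketched in a few lines. This is a minimal illustration using only NumPy, with a toy short-time Fourier transform and cosine similarity; the frame size, hop length and synthetic "voices" are illustrative assumptions, not a production deepfake detector.

```python
# A minimal sketch of spectrogram-based voice comparison using only NumPy.
# Real deepfake detection uses far richer features and models; this merely
# illustrates the idea: compute magnitude spectrograms of two clips and
# measure how similar their frequency content is.
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a simple short-time Fourier transform."""
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(signal[i:i + frame_size] * window))
        for i in range(0, len(signal) - frame_size + 1, hop)
    ]
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)

def similarity(sig_a, sig_b):
    """Cosine similarity between flattened magnitude spectrograms."""
    a = spectrogram(sig_a).ravel()
    b = spectrogram(sig_b).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two synthetic "voices": same pitch (genuine) vs. a different pitch (impostor).
t = np.linspace(0, 1, 8000, endpoint=False)
voice = np.sin(2 * np.pi * 220 * t)
same_voice = np.sin(2 * np.pi * 220 * t + 0.1)  # same frequency content
other_voice = np.sin(2 * np.pi * 440 * t)       # different frequency content

print(similarity(voice, same_voice))   # close to 1.0
print(similarity(voice, other_voice))  # much lower
```

Side-by-side comparison of the two spectrograms shows the mismatch that a human listener might miss.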

 

However, while solutions like these are still in the R&D phase, telcos can already use machine learning to build sophisticated supervised and unsupervised models from historical data. These models categorise calls and SMS messages and identify irregularities in real time, with accuracy reported as high as 98.5%.
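The supervised-model approach can be sketched as follows. The features (call duration, call volume, quick-hang-up rate), the synthetic data and the tiny logistic-regression trainer are all illustrative assumptions; the 98.5% figure in the text refers to production systems, not this toy.

```python
# A toy sketch of supervised scam-call classification on call metadata.
# Feature choice and data are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_calls(n, scam):
    """Synthetic features: [duration_sec, calls_per_hour, quick_hangup_rate]."""
    if scam:  # short calls, high volume, many quick hang-ups
        return np.column_stack([rng.normal(20, 5, n),
                                rng.normal(80, 10, n),
                                rng.normal(0.8, 0.05, n)])
    return np.column_stack([rng.normal(180, 40, n),
                            rng.normal(3, 1, n),
                            rng.normal(0.1, 0.05, n)])

X = np.vstack([make_calls(200, True), make_calls(200, False)])
y = np.array([1] * 200 + [0] * 200)

# Standardise features, then fit logistic regression by gradient descent.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))  # predicted scam probability
    w -= 0.1 * Xs.T @ (p - y) / len(y)   # gradient step on log-loss
    b -= 0.1 * float((p - y).mean())

def is_scam(call):
    """Classify a single call's metadata vector."""
    z = ((np.asarray(call) - mu) / sd) @ w + b
    return bool(z > 0)

print(is_scam([15, 90, 0.85]))  # high-volume short calls -> True
print(is_scam([200, 2, 0.05]))  # normal usage -> False
```

In production, a telco would train comparable models on real signalling records and score traffic as it arrives rather than in batch.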

 

These technologies have the potential to establish predictive models and conduct simulations, thus evaluating potential victims’ vulnerability to scam attacks. Using real-time decision-making capabilities, AI-powered tools can oversee each network subscriber’s behaviour and immediately respond to any fraudulent activity, allowing for instant network suspension and mitigating revenue loss.

 

In addition, telcos can analyse customer behaviour and proactively recognise irregular patterns that might suggest a compromised device or account. Customers should then receive a call from the operator to confirm account authenticity, adding another layer of security.
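A behavioural check like this can be sketched very simply. The single feature (outbound calls per hour) and the three-standard-deviation threshold are illustrative assumptions; real systems combine many behavioural signals.

```python
# A minimal sketch of behavioural anomaly detection for one subscriber,
# assuming the only signal is outbound calls per hour. The 3-sigma
# threshold is an illustrative choice, not an operator's actual rule.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it deviates more than `threshold` sigmas from history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

usual = [4, 6, 5, 7, 5, 6, 4, 5]  # typical outbound calls per hour
print(is_anomalous(usual, 6))     # False: within normal range
print(is_anomalous(usual, 60))    # True: sudden spike, possible compromise
```

A flag like this would trigger the verification call to the customer described above rather than an automatic block.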

 

Armed with these tools, telecom companies can swiftly locate and neutralise threats, optimise workloads, enhance efficiency, and ultimately curtail infrastructure and operations costs, all while improving the customer experience. Finally, as a preventive measure, telecom operators should regularly inform their customers about the rise in scam calls and provide guidance on how to avoid becoming a victim. This strategy not only enhances the customer experience but also mitigates potential losses.

 

Achieving advanced AI maturity isn’t a simple feat, but it is within the grasp of telecom companies. Faced with significant challenges, these companies could find their path to growth and revitalisation through automation and AI deployment across their telecom BSS, transforming into fundamentally AI-centric organisations.

 

With the rise of impersonation scams and deepfakes, customer expectations of safety are rising just as dramatically, which means telcos must deploy more sophisticated AI techniques to combat fraud and protect their subscribers.


Kelvin Chaffer is CEO of Lifecycle Software

 

Main image courtesy of iStockPhoto.com


© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543