Business Reporter

GenAI: more of a threat to identity than to jobs

David Higgins at CyberArk argues that UK office workers are more scared of deepfakes than of AI replacing their jobs

 

The UK heads to the polls on 4th July, joining 4 billion people across 60 countries who have already voted, or will vote, in elections this year. Against this backdrop, the threat of voters being influenced by AI-driven cyber-attacks has increased substantially.

 

Recently, the BBC uncovered that TikTok is recommending to young voters fake AI-generated videos featuring party leaders, along with misinformation and clips littered with abusive comments – prompting the social media company to increase its number of fact-checking experts and deploy AI-labelling technology.

 

It’s an extremely concerning trend for political leaders, and celebrities are facing similar battles. Deepfake images of Taylor Swift took social media by storm earlier this year – resulting in fans all over the world calling for tighter regulation of this increasingly sophisticated tactic. While undoubtedly a massive threat for those in the public eye, this malicious tool is not only affecting those used to facing scrutiny.

 

The threat of deepfakes is also infiltrating businesses. Earlier this year, UK engineering firm Arup fell victim to a £20m deepfake scam as an employee was duped into sending cash to criminals by an AI-generated video call. 

 

The threat of AI in office environments

AI-generated deepfake attacks are worrying UK workers across the board. In fact, contrary to news headlines, UK workers worry more about deepfakes than they do about AI taking over their jobs.

 

A recent CyberArk survey found that the vast majority (81%) of UK workers are anxious about their visual likeness being stolen or used to conduct cyber-attacks, with nearly half (46%) apprehensive about their likeness being used in deepfakes – a greater proportion than those worried that artificial intelligence (AI) will replace them in their roles (37%).

 

The concern stems from workers' lack of confidence in their own ability to tell a deepfake from reality. Over a third of UK office workers (34%) think they couldn't tell whether a very convincing phone call or email from their boss was fake.

 

The concern isn’t just limited to deepfakes, but the overall potential that AI holds to aid malicious actors in cyber-attacks. CyberArk’s research also shows that 78% of British workers are concerned about cyber criminals using AI to steal confidential information through digital scams.

 

The top three cyber-scams causing anxiety are payment fraud (59%), sensitive data being collected and misused (57%), and cyber-attackers working out workplace login details to access confidential information (47%).

 

UK workers also lack trust in their organisations' ability to defend against AI-generated threats: 70% aren't confident that their IT teams and security tools are prepared to defend against this type of deceit. Similarly, nearly two-thirds of workers (63%) are not confident that their organisation can stop AI-driven email and phishing attacks.

 

The rise, and continually improving quality, of AI-generated deepfakes that can realistically manipulate someone's identity – be it video, audio or image – is an extremely worrying trend, not only for celebrities and politicians but for UK plc too. And the survey results above paint a grim picture of the UK's preparedness for this new wave of AI-generated cyber-attacks.

 

The components of our digital identities are as much a part of who we are as a physical fingerprint is. If aspects of our digital identity are stolen or faked, the consequences can reverberate in both our personal and professional lives. Deepfaked audio and other AI-powered attacks can not only sway public opinion, they can also be a way of compromising our employers’ sensitive data and assets. 

 

What can your organisation do about this?

CISOs need to act now – or risk forever having to play catch-up with ever-evolving security threats.

 

The good news is that there are some things organisations can immediately implement. To begin with, they can identify and train external-facing employees who interact with customers and may have access to sensitive information. For example, support and services staff should be trained to ask additional questions to verify whether the external caller is a human or a deepfake.
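The escalation logic behind that advice can be sketched in code. The following is a purely illustrative Python sketch of an out-of-band verification policy – every name, threshold and action category here is a hypothetical example, not a CyberArk product or a prescribed standard:

```python
# Illustrative sketch: flag high-risk requests for out-of-band verification
# before any action is taken. All names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class CallerRequest:
    claimed_identity: str   # who the caller claims to be, e.g. "CFO"
    channel: str            # e.g. "phone", "video", "email"
    action: str             # e.g. "payment", "password_reset"
    amount_gbp: float = 0.0


# Hypothetical policy: these actions always require re-verification.
HIGH_RISK_ACTIONS = {"payment", "password_reset", "data_export"}
CALLBACK_THRESHOLD_GBP = 1000.0  # hypothetical monetary threshold


def requires_out_of_band_check(req: CallerRequest) -> bool:
    """Return True if the request must be re-verified on a separately
    trusted channel (e.g. a known phone number on file) before acting.
    This callback step is the key defence against a deepfake voice or
    video call: the attacker controls the incoming channel, but not
    the number you dial back."""
    if req.action in HIGH_RISK_ACTIONS:
        return True
    return req.amount_gbp >= CALLBACK_THRESHOLD_GBP


# Example: an urgent "CFO" video call requesting a large transfer is
# always escalated, however convincing the caller looks on screen.
urgent_transfer = CallerRequest("CFO", "video", "payment", 250000.0)
print(requires_out_of_band_check(urgent_transfer))  # True
```

The design point is that the check depends only on what is being requested, never on how convincing the caller seems – exactly the discipline that would have protected the Arup employee described above.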

 

Of course, it is also important to educate all employees on the risks of engaging with unverified content, and to discourage them from editing or amplifying such content.

 

As we navigate the uncharted territory of advanced AI technology including GenAI, collaboration, vigilance and proactive measures are essential to combat the threat of deepfakes.

 

Rather than blindly trusting employees to spot deepfakes, businesses need to embed robust measures for self-regulation – making sure employees and tech teams have the right tools to detect and deflect deepfake attacks.

 

And it’s incumbent on governments to take steps to prevent deepfakes from undermining our economy, and our democracy.

 


 

David Higgins is Senior Director, Field Technology Office at CyberArk

 

Main image courtesy of iStockPhoto.com and Userba011d64_201

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543