Luke Dash at ISMS.online explains that a combination of stricter legislation, enhanced technology, and comprehensive education is needed to address a rising and sophisticated threat
Not long ago, the idea of deepfakes was confined to sci-fi movies and far-fetched TV series.
We have seen it employed on our screens as a plot device, with villains using the technology to impersonate world leaders or public figures, manipulate events through fabricated audio or video evidence, and sow chaos and mistrust on a global scale.
However, fast-forward to 2024, and this is becoming all too real.
Take the UK general election campaign. Several high-profile deepfake cases have come to light, with politicians including Wes Streeting and Nigel Farage among those targeted.
Indeed, deepfakes now pose a tangible threat, enabling the spread of misinformation, defamation, and even political manipulation on an unprecedented scale. As techniques continue to evolve, distinguishing deepfakes from authentic content grows increasingly challenging, making this a potent and concerning phenomenon in our digital age.
1 in 3 businesses impacted
So, what are deepfakes, and how big a problem are they becoming?
In simple terms, deepfakes are synthetic media created using advanced artificial intelligence and deep learning techniques to manipulate or generate visual and audio content.
This creates highly realistic fabrications that can depict events, situations, or individuals saying or doing things that never actually occurred. From a business perspective, it is easy to see how such a capability could be a severe security risk.
The rising tide of deepfakes prompted us at ISMS.online to explore the extent to which this is affecting organisations around the UK as part of our latest survey.
The State of Information Security report highlights some alarming trends. Already, nearly a third (32%) of UK businesses have reported experiencing a deepfake security incident in the past year, making it the second most common type of information security breach in the country.
And the list of serious, high-profile cases is growing. The most notorious to date occurred at British engineering firm Arup, where a deepfake video call impersonating the company’s chief financial officer tricked an employee into transferring HK$200 million (£20 million).
The time to act is now
There will be more instances like this, with enterprises duped into damaging actions by deepfakes that employees may never think to doubt.
Action is needed on multiple fronts to counter the rising tide of deepfake attempts. The first, and perhaps most challenging, centres around a shift in mindset and a comprehensive educational drive.
Awareness of deepfakes, their prevalence and their effectiveness needs to grow quickly. By teaching employees to identify potential deepfake content and implementing robust verification processes, organisations can foster a culture shift in which colleagues become more curious and questioning.
Here, training in digital media literacy, critical thinking and fact-checking can equip individuals with the skills necessary to scrutinise information sources and detect manipulated content. Gamification, for example, emerged as a key means of improving cyber-security skills and awareness in our report, with almost three in 10 (29%) companies identifying it as one of the most effective training methods.
Legislation, meanwhile, can provide a robust framework to help organisations counter deepfakes. The next UK government should prioritise enacting laws that criminalise the malicious creation and dissemination of deepfakes, particularly those intended to deceive or cause harm. This can serve as a deterrent and enable legal action against perpetrators.
In addition, clear guidelines on labelling and disclosure requirements for synthetic media can promote transparency and empower individuals to make informed decisions. At the same time, data protection and consent regulations can safeguard against the misuse of personal data for deepfake generation.
Standards such as ISO 42001 also have an essential role to play. Companies should seek to adopt the framework, which provides guidelines for managing the risk of artificially manipulated media such as deepfakes. If adopted correctly, it should provide a structured foundation for firms to detect, respond to, and mitigate the threats posed by deepfake practices.
Promisingly, just 1% of respondents in the ‘State of Information Security’ report stated that they do not, or do not need to, comply with ISO 42001, with the majority of enterprises (70%) taking nine months or less to achieve compliance.
AI and machine learning: crucial tools
Another encouraging finding from the report concerns expenditure on cyber-security. More than seven in 10 (71%) respondents told us that their organisation plans to increase overall spending on IT security initiatives and resources, with 26% planning to increase their budgets by more than a quarter. Meanwhile, six in 10 (61%) intend to grow their investment in recruiting and hiring cyber-security personnel.
Some of this investment will be channelled into artificial intelligence and machine learning. Indeed, despite the challenges posed by AI-driven threats such as deepfakes, there is equal potential for AI and machine learning to strengthen organisations’ security postures – according to our report, more than seven in 10 (72%) businesses agree that this will enhance their data security strategies.
This sentiment is reflected in spending, with eight in 10 (81%) enterprises committed to maintaining or increasing their spending on AI and machine learning for security applications.
As deepfakes continue to evolve and pose escalating risks, a multi-pronged approach is crucial for organisations to stay ahead of the curve.
By fostering a culture of awareness through education, leveraging the power of AI and machine learning technologies, embracing industry standards like ISO 42001 and advocating for robust legislation, businesses can fortify their defences against this emerging threat.
Investment in proactive measures today will safeguard against the detrimental consequences of deepfake manipulation, thus helping to protect reputations, operations and the integrity of information in our increasingly digitised world.
Luke Dash is CEO at ISMS.online