
Deepfakes: destroying trust in information


Tim Callan at Sectigo explores the problem of deepfakes: a threat to truth, trust and identity which goes beyond election disinformation


The recent election season has served as a stark reminder of the potential dangers posed by deepfakes – artificially synthesised content that can make someone appear to say or do things they never did. A recent TV investigation found that 85% of those shown Labour-biased manipulated content voted Labour, while 100% of those shown Conservative-biased manipulated content chose Conservative.


While the potential for manipulating voters with deepfakes is a serious and valid cause for concern, focusing on this election cycle is merely the tip of the iceberg. This technology presents a significant threat to the very foundation of our digital world: trust in the authenticity of information.


The democratisation of deception

Gone are the days when deepfakes were the exclusive domain of Hollywood special effects studios. Today, with readily available software and online tutorials, anyone with a basic understanding of computers can create a convincing deepfake.


This democratisation of deception is particularly troubling. Malicious actors can now target individuals and groups with personalised deepfakes designed to damage reputations, sow discord, or even extort money. The recent case involving high-profile UK politicians targeted with sexually suggestive deepfakes ahead of the election exemplifies this worrying trend.


Beyond elections: a broader threat

The potential impact on elections is undeniable, but the threat of deepfakes extends far beyond the political sphere. Imagine a deepfake video of a CEO announcing bankruptcy, or a fabricated news report triggering a stock market crash. An experiment recently demonstrated that AI can produce convincing deepfakes capable of bypassing voice recognition for online banking.


Deepfakes can be used to create fake news articles, manipulate social media narratives, and even doctor legal documents. A fabricated news report shared on social media can quickly snowball into a national crisis, while a deepfake used to alter a legal document could have devastating consequences. Deepfakes can erode trust in everything from news media to financial statements, leading to financial and social chaos.


A revolution in how we engage with media

Before addressing the technological solutions to deepfakes, it’s crucial to recognise that this new reality demands a shift in how we consume media. We can no longer take things at face value – a healthy dose of scepticism is essential.


We need to be more critical of the information we encounter – questioning the source, looking for inconsistencies, and being wary of anything that seems too good, or too bad, to be true. Media literacy education of this kind is a crucial defence against deepfakes.


It is entirely plausible that we will begin to see educational programmes for all ages that teach critical thinking skills, source evaluation techniques, and the use of tools such as reverse image search. Equipping the public with these skills will empower them to be discerning consumers of information.


Technological solutions on the horizon

Even with this consumer education, relying solely on individual vigilance is not enough. Technological solutions are urgently needed to combat the deepfake problem – and technology can almost always be countered with technology.


One promising solution is the implementation of encrypted timestamps within recording devices. Imagine a tamper-proof watermark embedded within a photo or video at the time of capture. This timestamp would act as a digital fingerprint, providing a verifiable record of a file’s authenticity.
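
To illustrate the principle, the sketch below (in Python, using the cryptography package) shows how a device might bind a capture timestamp to a file’s digital fingerprint and sign the result with a hypothetical per-device key. It is a simplified illustration of the idea rather than any particular vendor’s scheme; real content-provenance systems are considerably more involved.

```python
# Illustrative sketch only: bind a capture timestamp to a media file's hash
# and sign it with a hypothetical per-device key.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric import ed25519


def create_capture_record(media_bytes: bytes,
                          device_key: ed25519.Ed25519PrivateKey) -> dict:
    """Return a signed record of the file's fingerprint and capture time."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": int(time.time()),  # timestamp applied at capture
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(payload).hex()
    return record


def verify_capture_record(media_bytes: bytes, record: dict,
                          device_public_key: ed25519.Ed25519PublicKey) -> bool:
    """Check that the file still matches its fingerprint and the signature is valid."""
    if hashlib.sha256(media_bytes).hexdigest() != record["sha256"]:
        return False  # the file was altered after capture
    payload = json.dumps(
        {k: record[k] for k in ("sha256", "captured_at")}, sort_keys=True
    ).encode()
    try:
        device_public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except Exception:
        return False  # signature does not match, so the record was tampered with


# The device signs at capture time; anyone with its public key can verify later.
device_key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
record = create_capture_record(photo, device_key)
print(verify_capture_record(photo, record, device_key.public_key()))            # True
print(verify_capture_record(photo + b"edit", record, device_key.public_key()))  # False
```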


High-end cameras are already incorporating such technology, but its widespread adoption across smartphones and other consumer devices is essential. By making encrypted timestamps ubiquitous, we can create a verifiable record of digital media, making it much harder for deepfakes to go undetected.


AI’s dual role

AI, the very technology powering deepfakes, can also be used to fight them. Businesses – financial institutions and modern fintech companies in particular – should embrace AI’s potential to analyse vast amounts of data and detect anomalies that might indicate tampering with voice authentication or facial recognition.


The machine-learning aspect of AI allows it to improve its detection capabilities over time, leading to increased accuracy and fewer false positives. Additionally, AI’s real-time functionality enables it to make crucial judgments in the moment, potentially preventing scams before they occur.
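
As a simplified illustration of this anomaly-detection idea, the sketch below trains a model only on features from genuine voice recordings and flags anything that falls outside that learned region. The feature extraction is assumed to happen upstream, and the random numbers stand in for real spectral features; production deepfake detectors are considerably more sophisticated.

```python
# Illustrative sketch: flag suspicious voice samples with an anomaly detector
# trained only on genuine recordings. The arrays below are stand-ins for real
# per-clip features (e.g. spectral statistics) extracted upstream.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in features: 500 genuine clips, 20 numbers each.
genuine_clips = rng.normal(loc=0.0, scale=1.0, size=(500, 20))

# Fit on genuine speech only, so anything unusual is scored as an anomaly.
detector = IsolationForest(contamination=0.01, random_state=0).fit(genuine_clips)


def looks_synthetic(clip_features: np.ndarray) -> bool:
    """Return True if the clip's features fall outside the learned 'genuine' region."""
    return detector.predict(clip_features.reshape(1, -1))[0] == -1


# A clip whose features are far from anything seen in training is flagged.
odd_clip = rng.normal(loc=5.0, scale=1.0, size=20)
print(looks_synthetic(genuine_clips[0]))  # typically False
print(looks_synthetic(odd_clip))          # typically True
```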


The road to a more trusted digital future

The fight against deepfakes requires a multi-pronged approach. Educating the public on how to identify deepfakes, developing robust detection technologies, and implementing legal frameworks to hold deepfake creators accountable are all crucial steps.


The future holds both challenges and opportunities. Deepfakes pose a significant threat, but they can also be a catalyst for innovation. By working together, we can develop a more secure and trustworthy digital ecosystem, where authenticity is paramount and the lines between truth and fiction are not easily blurred. 


Tim Callan is Chief Experience Officer at Sectigo


Main image courtesy of iStockPhoto.com and hamzaturkkol
