Dominic Forrest at iProov makes the case against remote video identity verification
In today’s digital era, the financial services sector is increasingly relying on video-based identity verification methods to onboard new customers to digital platforms.
Whether it’s banks, lenders, or cryptocurrency providers, the utilisation of video calls to verify identities has become a common practice for high-risk identity verification. However, recent advancements in technology have unveiled significant vulnerabilities in this seemingly secure process.
Video call verification typically entails a one-to-one video call between the user and a trained operator. During the call, the user is asked to present an identity document, which the operator then compares against the user's face. While this process appears straightforward, it has proven to offer little assurance that the individual on the other end of the call is genuinely who they claim to be. This is where the threat of deepfakes enters the picture.
In 2022, researchers at the Chaos Computer Club demonstrated how deepfake technology could be used to circumvent video call verification systems. Using generative AI and a forged ID, they were able to convincingly superimpose artificial imagery onto a threat actor's face, effectively bypassing the verification process.
The incident highlighted the susceptibility of both the technology and the human operators overseeing it to synthetic imagery attacks. The German Federal Office for Information Security has since warned against video call verification because of its vulnerability to these attacks.
The ramifications of this vulnerability can be profound, particularly within the financial services industry. If digital identity solutions cannot effectively defend against deepfake threats during onboarding and authentication processes, they become susceptible to exploitation for criminal purposes.
As cited by the US Federal Reserve, payment fraud, money laundering, and terrorist funding are just a few examples of the nefarious activities that could proliferate if deepfake vulnerabilities are not addressed.
And the dangers posed by deepfakes extend beyond the realm of financial services. In corporate environments, using deepfake technology in video conferencing calls has led to significant fraud and deception.
One notable recent case involved a finance worker in Hong Kong who fell victim to a deepfake scam, transferring over $25 million to scammers posing as his colleagues during a video conference call. The incident underscores the potential for deepfakes to undermine trust and enable highly lucrative fraud in corporate settings.
Unsurprisingly, and in response to these concerns, there is a growing argument for moving away from video call identity verification and adopting more reliable authentication techniques that combine automated AI matching and liveness detection, supplemented by human supervision of the machine learning process. By adopting this approach, financial institutions can better safeguard against deepfake threats and enhance the integrity of their identity verification processes.
For example, organisations leveraging biometric verification technology are in a stronger position to detect and defend against these attacks than those relying solely on manual operation. Yet, as with all cyber-security applications, biometric technologies must constantly evolve to stay ahead of the ever-present threat of novel attacks.
Therefore, it’s important to understand that not all biometric face verification technologies are equal when it comes to threat mitigation, and they offer varying levels of identity assurance. Assurance can begin at remote onboarding, when a user first asserts their identity by capturing an image of a government-issued identity document and of their face.
Returning users can then authenticate with their face biometric, which is compared against the biometric template created at onboarding, with liveness detection confirming that a real, live person is present. This re-verification can be triggered at various inflection points across the user lifecycle, based on time, activity, changes in risk thresholds or any other factors determined by the organisation.
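To make that comparison step concrete, the following is a minimal Python sketch of how a returning user's face embedding might be checked against the stored onboarding template alongside a liveness score. The embedding size, similarity metric, thresholds and function names are illustrative assumptions, not a description of any vendor's actual implementation.

import numpy as np

# Minimal sketch of a face re-authentication check. The 512-dimensional
# embeddings, thresholds and function names are illustrative assumptions,
# not any vendor's production values.

MATCH_THRESHOLD = 0.80      # minimum cosine similarity to the stored template
LIVENESS_THRESHOLD = 0.90   # minimum confidence the capture is of a live person

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_returning_user(template: np.ndarray,
                          probe: np.ndarray,
                          liveness_score: float) -> bool:
    # Accept only if the new capture both matches the onboarding template
    # and passes liveness detection (i.e. is not a replay or a deepfake).
    matches_template = cosine_similarity(template, probe) >= MATCH_THRESHOLD
    is_live = liveness_score >= LIVENESS_THRESHOLD
    return matches_template and is_live

# Toy embeddings standing in for the outputs of a real face-embedding model.
rng = np.random.default_rng(0)
template = rng.normal(size=512)                      # stored at onboarding
probe = template + rng.normal(scale=0.1, size=512)   # new capture by the same user
print(verify_returning_user(template, probe, liveness_score=0.97))   # True

The point this sketch illustrates is that matching alone is not enough: a deepfake replayed into the camera feed could score a perfect match against the template, which is why the liveness check is treated as an independent, mandatory condition.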
What has become clear is that generative AI is advancing at such a rate that we’re seeing a constant stream of new tools enter the market. Only last week, OpenAI launched a new text-to-video tool which, while impressive, makes it even easier for bad actors to generate high-quality video deepfakes and gives them greater flexibility to create videos that could be used for nefarious purposes.
While it’s widely accepted that it’s almost impossible to detect synthetic imagery with the human eye, this doesn’t mean we can’t secure trust online through imagery-based methods. The onus is on organisations to develop or adopt tools that utilise AI-based innovation to mitigate the risk of attacks like the one in Hong Kong.
Further complicating this already challenging situation is the rapid increase in the number of threat groups collaborating daily on best practices for undermining remote identification systems. In a recent threat intelligence report, our analysts identified that 47% of these groups were created in 2023, implying that the number of groups grew by over 90% during the year.
As the prevalence of deepfake technology continues to grow, organisations must take proactive measures to address these threats. Beyond financial services, the implications of deepfake vulnerabilities extend to all sectors that rely on video-based authentication methods.
By raising awareness of the risks associated with deepfakes and implementing robust security measures, we can mitigate the potential impact of synthetic media on identity verification processes and safeguard against fraudulent activities.
Dominic Forrest is CTO at iProov