Cyber-attacks accelerating with agentic AI

Akif Khan at Gartner asks whether information security leaders are ready for the 2027 tipping point of AI agents

The digital battleground is shifting, and the clock is ticking faster than ever. Gartner predicts that by 2027, the impact of AI agents on cyber-attacks will be seismic: a 50% reduction in the time it takes to exploit compromised accounts.

As AI agents fundamentally reshape the anatomy of cyber-attacks, security and risk management (SRM) leaders must rethink their organisation’s defences to stay ahead.

The driving force behind faster attacks

AI is accelerating the pace of cyber-attacks by automating nearly every stage of an account takeover (ATO). What was once a process that relied heavily on manual effort for credential gathering and exploitation has now become a lightning-fast operation. AI-driven agents can scan vast amounts of data, pinpoint exposed accounts, harvest credentials, and exploit weaknesses with unprecedented speed, leaving organisations with an ever-shrinking window in which to respond.

But it’s not just about speed. The rise of AI is also changing the sophistication of attacks. Automation enables attackers to conduct highly personalised and more convincing phishing campaigns, using deepfake technology to mimic the behaviour, and even the visual identity, of trusted individuals.

This makes traditional detection methods obsolete, as these attacks now appear more legitimate and are harder to differentiate from genuine communications.

Furthermore, the growing complexity of “counterfeit reality” – the use of deepfake audio and video – adds a new layer to these threats. Attackers can now convincingly impersonate executives, employees, or business partners, making social engineering attempts not only more believable but harder to detect.

We’ve already seen the damage these counterfeit realities can cause, with attackers impersonating executives to authorise fraudulent transactions. These high-profile cases are just the tip of the iceberg. 

Countering the counterfeit threat

In response to this rapidly evolving threat, SRM leaders need to implement layered and proactive internal defence strategies by:

  • Transitioning to passwordless, phishing-resistant MFA: Eliminate reliance on vulnerable passwords. Traditional credentials, often harvested through data breaches or phishing, remain a common entry point for attackers. Transitioning to multidevice passkeys and other passwordless technologies can significantly reduce the opportunities for ATO.
  • Adopting AI agent detection tools: Invest in solutions that detect and classify interactions involving AI agents across web, app, API, and voice channels. These tools can identify anomalous behaviours and flag potential threats before significant damage occurs.
  • Enhancing employee training: Cyber-security awareness programmes should evolve to include real-world simulations of deepfake scenarios. Educate employees on how to recognise and resist counterfeit reality techniques to bolster internal defences.
  • Revisiting security workflows: Adapt procedures and workflows to account for the new reality of AI-enhanced threats. For example, implementing secondary verification steps for high-stakes transactions or sensitive communications can provide an additional layer of protection; a simple sketch of such a step follows this list.
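
To make that last point concrete, the Python sketch below gates high-value payments behind an out-of-band approval. It is illustrative only: the threshold, the field names and the send_push_challenge helper are assumptions for this sketch, not a reference to any particular product.

    HIGH_VALUE_THRESHOLD = 10_000  # illustrative policy threshold

    def send_push_challenge(approver: str, summary: str) -> bool:
        # Hypothetical out-of-band step: push a challenge to the approver's
        # registered device and return True only on explicit approval.
        # Placeholder behaviour: hold everything until wired to a real provider.
        print(f"challenge sent to {approver}: {summary}")
        return False

    def execute_payment(payment: dict) -> None:
        print(f"payment released: {payment['id']}")

    def release_payment(payment: dict) -> None:
        # Low-value payments follow the normal workflow.
        if payment["amount"] < HIGH_VALUE_THRESHOLD:
            execute_payment(payment)
            return
        # High-value payments need a second confirmation over a channel the
        # attacker does not control, so a deepfake call alone cannot move money.
        summary = f"{payment['amount']} to {payment['payee']}"
        if send_push_challenge(payment["approver"], summary):
            execute_payment(payment)
        else:
            print(f"payment held for review: {payment['id']}")

    release_payment({"id": "INV-1", "amount": 2_500, "payee": "Acme", "approver": "cfo"})
    release_payment({"id": "INV-2", "amount": 48_000, "payee": "Acme", "approver": "cfo"})

The design point is that the approval travels out of band, over a channel that voice- or video-based impersonation cannot reach.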

AI: fight fire with fire 

While AI is undoubtedly a powerful weapon for cyber-criminals, it can also serve as a critical tool for defenders.

Generative AI models can help organisations analyse large datasets to identify vulnerabilities and potential breaches more efficiently. Automated detection systems powered by AI can monitor network activity in real time, flagging anomalies indicative of deepfake-based social engineering or unauthorised access attempts.
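
As one concrete illustration of that kind of monitoring, the sketch below uses scikit-learn’s IsolationForest to flag unusual login events. The feature set – hour of login, megabytes transferred, recent failed attempts, new-device flag – and all the numbers are assumptions for illustration; a real deployment would use far richer features and live telemetry.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row is one login event:
    # [hour_of_day, mb_transferred, failed_attempts_last_hour, is_new_device]
    baseline = np.array([
        [9, 12.0, 0, 0], [10, 8.5, 0, 0], [11, 15.2, 1, 0],
        [14, 9.9, 0, 0], [15, 11.3, 0, 0], [16, 7.8, 0, 0],
    ])

    # Train on normal traffic; contamination is the expected anomaly rate.
    model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

    # A 3am login from a new device moving lots of data should stand out.
    events = np.array([[10, 10.0, 0, 0], [3, 250.0, 6, 1]])
    for event, label in zip(events, model.predict(events)):
        print("ANOMALY" if label == -1 else "ok", event)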

Leaders can also use AI to create deception technologies, such as honeypots or fake data assets, that mislead attackers. By luring adversaries into realistic traps, SRM leaders can study their methods and strengthen defences based on that intelligence.
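
A deception asset can be as simple as a listener on a port that nothing legitimate should ever touch. The minimal Python honeypot below only logs connection attempts; the port number and log format are arbitrary choices for this sketch, and real deception platforms do far more (fake data, attacker fingerprinting, alert routing).

    import logging
    import socketserver

    logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    class HoneypotHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Any connection here is suspect: no real service lives on this
            # port, so record the source address and the first bytes sent.
            data = self.request.recv(1024)
            logging.info("connection from %s, first bytes: %r",
                         self.client_address[0], data[:64])

    if __name__ == "__main__":
        # Real deployments would also mimic a plausible service banner.
        with socketserver.TCPServer(("0.0.0.0", 2222), HoneypotHandler) as srv:
            srv.serve_forever()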

The deepfake detection gap

Despite the tremendous potential of AI for both attackers and defenders, one of the most pressing issues is detecting deepfakes in real time. As deepfake technology becomes increasingly sophisticated, traditional detection systems struggle to keep up.

To address this, SRM leaders need to invest in advanced audio and video analysis technologies capable of identifying inconsistencies in deepfake content. These tools can flag discrepancies such as unnatural eye movements or mismatched audio cues – subtle details that human reviewers might miss.
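
To illustrate the kind of signal such tools look for, the sketch below computes the eye aspect ratio (EAR) used in classic blink-detection work; early deepfake video was often betrayed by unnaturally low blink rates. The six eye landmarks per frame are assumed to come from any facial-landmark detector, and the 0.2 threshold is a common heuristic, not a production detector.

    import numpy as np

    def eye_aspect_ratio(eye: np.ndarray) -> float:
        # eye: six (x, y) landmarks around one eye, in the standard ordering.
        # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply on a blink.
        v1 = np.linalg.norm(eye[1] - eye[5])
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])
        return (v1 + v2) / (2.0 * h)

    def blink_count(ear_per_frame: list, threshold: float = 0.2) -> int:
        # Count closed-to-open transitions; too few blinks per minute is a
        # weak but useful signal to combine with audio-sync checks.
        blinks, closed = 0, False
        for ear in ear_per_frame:
            if ear < threshold:
                closed = True
            elif closed:
                blinks += 1
                closed = False
        return blinks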

Adapt or fall behind: the 2027 tipping point

AI-driven cyber-attacks are not a future concern – they are reshaping the security landscape today. By 2027, organisations that fail to adapt will find themselves overwhelmed by the speed and sophistication of AI-powered threats. The key to survival is proactive innovation, fostering a culture of vigilance, and embracing AI-driven defence strategies.

By integrating adaptability and resilience into their security frameworks, organisations can safeguard trust, ensure operational continuity, and stay ahead of the evolving cyber-threat landscape. 

Akif Khan is a VP Analyst at Gartner. Gartner analysts will present the current and future state of cyber-security at the Gartner Security & Risk Management Summit in London, from 22-24 September 2025

Main image courtesy of iStockPhoto.com and Boy Wirat
