Fabien Rech at Trellix explains how organisations can navigate the wild west of AI safety and robust data privacy
The AI Safety Summit set a precedent for organisations acknowledging the significance of the opportunities posed by the technology. This is especially true of ensuring that security and data privacy stay top of mind when setting priorities.
It’s now more important than ever for senior business leaders to reap the benefits of these tools to streamline their security infrastructure. By digitalising their systems, C-level executives can react more quickly to emerging security situations, and even resolve them proactively.
However, these tools are a double-edged sword, with benefits also being available to threat groups as they seek to upgrade their attack methodology. With the ease of access to open-source AI tools and their integration into existing IT and OT systems, threat actors can look to exploit business enablers like chatbots and media generators for malicious intent.
The impact of data theft attacks can vary, but our research found that the consequences for the business can be detrimental. In our survey, 44% of global CISOs reported that a data breach resulted in negative public exposure, while 43% cited revenue loss and 41% business downtime.
Additionally, we’ve seen a rise in the sophistication of social engineering initiatives using deepfake technology as well as enhanced phishing and vishing scams. With an aim to trick employees, hackers are finding it easier to bypass security controls, and compromise swathes of confidential and sensitive business data.
This is why businesses need to be more aware of the risks posed by AI-powered attacks, identify key trends, and integrate digitally empowered tools to keep pace.
The efficiency of AI-powered attacks
AI is increasingly being leveraged within threat tools and programmes to enhance the efficacy of attacks, resulting in data breaches becoming more successful and impactful. Keeping up with how cyber-criminals use large language models (LLMs) and generative AI tools like ChatGPT is essential for security teams to build a robust defence against them.
These AI tools allow threat actors to diversify their approach for more sophisticated and targeted attacks. LLM tools have proven invaluable in enhancing the process of gathering data and information to conduct spear phishing campaigns, making it far easier to generate targeted emails at scale, at little to no marginal cost.
This has led to a resurgence of “script kiddies”: impressionable, young and generally low-skilled individuals looking to enter the cyber-criminal world. With generative AI, the barrier to entry is lowered, so these inexperienced actors can bypass the need to develop heavily curated and bespoke tools – streamlining social engineering strategies when stealing data.
As AI-enhanced cyber-attacks increase, the imperative for business leaders is clear. It’s no longer enough to simply react to AI-powered attacks; they need to get on the front foot. This involves being informed of the types of AI tools and their impact on the wider security ecosystem. Support must be given to ensure employees are trained, and security teams have the right infrastructure in place to mitigate these attacks.
Advanced social engineering strategies
Phishing attacks aren’t new territory for cyber-criminals; in fact, research found that they accounted for 90% of breaches in 2023.
However, there have been developments in how threat actors use new, enhanced techniques to exploit individuals through targeted spear phishing and vishing initiatives – notably deepfake technology. While a polarising topic, deepfakes have had ground-breaking applications within the film and media industry, creating near-realistic likenesses of actors and actresses and bridging the uncanny valley to smooth over inconsistencies with continuity.
However, we’ve also witnessed far more insidious uses of the technology – specifically, AI-powered deepfakes that mimic influential celebrities and human speech. By setting up elaborate scams, threat actors can hoax impressionable individuals and bypass detection systems. This can be particularly challenging for global organisations, as threat actors simulate other languages for a far wider reach.
With social engineering driving just under half (45%) of breaches, individuals are increasingly being convinced to leak credentials and expose mission-critical data. Customers of victim organisations are not in the clear either: we recently observed scammers exploiting the situation when Wilko entered administration. Threat actors wasted no time in leveraging customer data and setting up fake websites to extort them as well.
Building a more resilient security environment
Ultimately, organisations need to adapt dynamically to the sophistication and opportunities posed by these innovative technologies and seek to meet them head on. Keeping step with threat actors, whilst an ever-present challenge, can be simplified by using the right technology.
By empowering security teams with AI-supported tools and placing them directly in the hands of employees, vulnerabilities can be nipped in the bud.
Board and C-level decision makers need to analyse the full threat surface and identify the core vulnerabilities. Here is where risk appetite is important, as measuring acceptable risk is vital when looking at business units from a siloed perspective.
However, that doesn’t necessarily have to be the case, as extended detection and response (XDR) tools provide robust blanket security that integrates smart technology and machine learning. This protects data, network and endpoints together, with automated detection and response pathways that learn from previous attacks, weaving findings into ongoing protection.
In fact, our research found that 1 in 5 (19%) global CISOs who had integrated XDR prior to an attack had a deep understanding of the threat landscape and intelligence. Using the right solutions to adapt swiftly to emerging threats and attack vectors meant that they didn’t need to compromise or forgo acceptable risks.
Diversifying security to mitigate data breaches
Overall, business leaders need to be aware of the proliferation of LLM, generative AI and deepfake tools that ease the pathways for threat actors to trick, scam and infiltrate core business units. These actors continuously seek to navigate and bypass security controls to gain access to sensitive and confidential data.
This is especially important in today’s diverse security ecosystem with new technologies shifting the most effective tactics and the approach needed to defend business assets. Furthermore, the tools available to threat actors are becoming increasingly challenging to defend against. Security teams need to have greater visibility over their threat surface and be empowered with top-down support from business leaders.
It’s important for organisations to integrate the right, smart, and adaptable technology, as innovation enables security teams to stay ahead of the curve.
By training teams across business units, decision makers can ensure that all employees across the organisation are aware of the risks. This helps prevent serious data breaches from occurring and keeps operations running with minimal downtime.
Fabien Rech is SVP & GM EMEA at Trellix
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543