ao link
Business Reporter
Business Reporter
Business Reporter
Search Business Report
My Account
Remember Login
My Account
Remember Login

AI inclusivity

Andy Syrewicze at Hornetsecurity shares his insights on rising concerns around artificial intelligence and how AI will impact businesses’ cyber-security

 

The 2025 Artificial Intelligence Action Summit saw tech gurus, business leaders, and political figures meet in Paris last month to discuss AI’s opportunities for broader public interest, innovation, and culture. 

 

The common goal was to increase AI’s inclusivity, providing technical solutions that are transparent and accessible to all, but also to explore the new opportunities and challenges, particularly when it comes to safeguarding information and combating its manipulation. 

 

It’s heartening to see that there was an emphasis on strengthening businesses’ cyber-defences against malicious attacks. Significant progress is expected to be made in upgrading cyber-security tools and raising common awareness of serious data and information loss that AI could precipitate. To achieve this, robust privacy protections should be implemented on a global level to guard against high-potential breaches and identity theft.

 

 

AI’s inclusivity impact on businesses

It’s safe to assume that there will be an increase in AI-generated attacks. AI has already transformed cyber-security, and this process will not slow down. Those attacks will become hyper-sophisticated, as AI is used to craft more complex and evasive threat vectors. This is compounded by AI’s growing autonomy, which could eventually enable AI to compromise systems and data on its own in the future if given that mandate by those using it.

 

The impact of AI on accelerating cyber-security transformation will be further strengthened by its growing inclusivity. The gradual achievement of inclusive AI will empower non-experts to gain a greater understanding and appreciation for the importance of cyber-security.

 

By acquiring expertise through frequent, self-directed practices of AI systems, they are likely to be able to address cyber-security issues with greater confidence, such as recognising suspicious emails or questionable MFA notifications.

 

AI systems will also evolve from assisting cyber-security analysts to orchestrating incident response scenarios, making threat mitigation faster and more autonomous as well.

 

 

Identity theft

The rise of digital identity theft and deepfakes, which blur the line between real and fake online personas, will be seen as one of the most pressing cyber-security challenges faced by businesses and individuals. AI’s ability to generate convincing fake identities means the risk of digital identity theft will escalate.

 

If no technical and legal precautions are made to tackle this now, it’s entirely possible that in the next two decades we may not be able to distinguish real people from AI-generated personas and we would have no recourse to justice in case of failure. This is, in fact, already beginning to happen, but business owners should expect it to get worse in the future.

 

This growing problem will present significant challenges for trust and security, increasing crime and disturbance towards the general order of how people conduct business activities. Deepfakes and phishing attacks that we see today will be the tip of the iceberg when it comes to theft and scams, leading to confidential data and information loss, as well as identity theft that affects individuals and businesses.

 

People will question how they can protect not just their data but also their identity, with a potential eventual loss of trust in AI and the technologies it powers. 

 

It’s also worth reflecting on the rise of autonomous systems powered by AI, including self-driving cars and medical implants. These systems are connected, in some cases, for smooth delivery of services, making them vulnerable to attacks that could compromise their functionality or endanger human lives.

 

As AI could, one day, have a continuous effect on these systems, we may see a new breed of cyber-security services emerge to protect them. It’s possible that AI, with more professional human interference, will also be the solution to these AI-generated threats. 

 

Higher levels of safety surveillance will be needed among professional services such as banks, hospitals and retail service providers, not to mention critical infrastructure services. Those consequences could ultimately undermine the inclusive AI the global community is working so hard to establish.

 

 

Security training

There’s a new threat, too. As wearables, such as smartwatches, become more sophisticated, they could be targeted by cyber-criminals as a channel for seeking personal data.

 

The next generation of devices, such as Meta’s AR glasses, already present new privacy risks, and this concern is set to grow. These devices, powered by AI, can access real-time information about surrounding people and locations and track interactions. In a future where attackers could access what people see and experience, the potential for privacy breaches will go beyond cyber-crime. 

 

To tackle these concerns, security awareness training across all sections of society should be implemented as soon as possible. In addition to traditional ways such as lectures, ongoing AI-powered modules and simulations can be another option to help individuals recognise and respond to these risks.

 

Over time, the formats used to train will need to shift to effectively engage future generations. Just like learning to drive, security training will need to recalibrate expectations and ensure individuals are aware of critical cyber-security threats.

 

As we move towards a ‘genAI everywhere’ society, it’s likely that a new industry will emerge to secure AI models, prompts, responses, and their usage. Attacks like prompt injections and model poisoning are already on the rise.

 

With the popularity of inclusive AI, the demand for new security tools and services to ensure its reliability and trustworthiness will only increase. Simultaneously, regulatory frameworks and parameters of operation will develop and evolve as part of this.

 

 

What’s next

Without clear ethical guidelines, AI could cause significant security challenges that threaten data, information and identities.

 

Business will be profoundly affected as these issues may result in questionable, unexpected, and potentially uncontrollable outcomes that break people’s trust — something that contradicts the goals outlined at the AI Action Summit and the expectations expressed there.

 

To save all the benefits gained from years of AI development, we may see the emergence of a new industry that secures AI models, prompts, responses and their usage. With the rapid growth of inclusive AI, the demand for new security tools and services to ensure its reliability and trustworthiness will also increase significantly.

 

Already today, next-gen cyber-security developers incorporate AI and LLM in their solutions to combat current and evolving threats, be they AI-powered or not. Vendors will also aim to invest in continuous research and development to keep ahead of the curve. This is set to continue as a means of robust protection coupled with ongoing cyber-awareness training on the fly.  

 


 

Andy Syrewicze is a Microsoft Security MVP and Security Evangelist for Hornetsecurity. To learn more about Hornetsecurity’s views of AI’s impact on cyber-security and businesses, visit hornetsecurity.com

 

Main image courtesy of iStockPhoto.com and koto_feja

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543