
How AI can let the hackers in

Danny Lopez at Glasswall explains how artificial intelligence is creating genuine cyber-security risks


The European Parliament recently passed the world’s first comprehensive AI law, significantly raising the stakes for the developers and users of AI technologies targeted by cyber-criminals. Its arrival also reflects an increasingly complex relationship between AI and cyber-security, in which tech vendors and threat actors are locked in a technology arms race.


The fundamental issue is that AI has brought with it a whole new layer of cyber-security opportunities and risks. Not only are organisations faced with the increasing likelihood that threat actors will use AI to deliver more successful attacks, but the developers of AI systems are themselves becoming targets.


In this context, cyber-criminals view AI technologies as another route to deliver malware or gain access to networks. They also recognise the potential for causing significant disruption as more public and private sector organisations integrate AI into their existing infrastructure.


The new EU legislation explicitly recognises this risk. For instance, Article 15 states that “high-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.”


Failure to comply with these rules could result in heavy fines: up to “€35 million or 7% of global annual turnover for banned AI applications, €15 million or 3% for violations of obligations under the AI Act, and €7.5 million or 1.5% for providing incorrect information.”


Given the EU’s track record in pursuing GDPR compliance and imposing fines, it’s likely we’ll see high-profile cases emerge as breaches of the AI Act come to light. In GDPR’s first six years of operation, for example, national regulators and governments have imposed over €4.5 billion in fines across more than 2,000 cases.


Some argue that this total should have been much higher, with critics of GDPR enforcement complaining that “the system to handle international complaints is bloated and slows down enforcement.” Whatever the perspective, breaches and fines are inevitable.


Assessing the vulnerabilities

Looking at the risks a little more closely, it’s clear that cyber-criminals are already using AI technologies to mount more sophisticated attacks. These include more advanced and convincing phishing emails and messages, where analysis of existing communication patterns enables threat actors to mimic trusted sources or contacts.


The worry for security teams is that people are already a major source of risk in the security chain, and as these strategies become even more convincing, the number of serious breaches will rise.


But that’s just the start. Advanced, AI-powered cyber-crime technologies can also be used for a variety of other purposes, from finding backdoor access into networks and exploiting zero-day vulnerabilities to massively accelerating the process of uncovering user passwords.


Collectively, these AI-powered intelligence and automation technologies have the potential to industrialise cyber-crime on a level not seen before.


Another growing area of concern is the use of AI for content creation, where the technology is already widely used to create fake news articles, social media content and websites. With 2024 set to be the biggest election year on record, as around half the world’s population goes to the polls, there is huge scope for AI-powered misinformation to influence political discourse and disrupt election processes.


AI tools are also under attack. Threat actors and nation-state adversaries see the technologies as an area of vulnerability, with objectives ranging from stealing code and IP to disrupting AI outputs so that users are fed inaccurate or misleading results.


In addition, logins for systems such as ChatGPT, often harvested from unprotected devices, are being sold on the dark web, where they can be used to access sensitive data and further login credentials.


Good vs bad

So, what is being done to mitigate these risks? Regulation has an important role to play, and the EU AI Act is a starting point for ensuring organisations protect systems and data. More legislation is a certainty, and governments must continue to work together to ensure threat actors are identified, and that prevention and mitigation technologies remain ahead of the risks.


While it’s tempting to think that the answer to the risks caused by ‘bad’ AI is to balance them out with ‘good’ AI, this shouldn’t be pursued at the expense of human oversight. Effective cyber-security strategies will remain firmly dependent on combining technology tools with human experience, expertise and ingenuity, from regulation to employee training and everything in between.


Ultimately, organisations that implement AI technologies without the proper protection measures will be in the worst possible position: they will be at significantly increased risk of a security breach and subject to expensive enforcement action.


For some, the knock-on effect will be that instead of focusing on how AI can help them succeed, they will be forced to make sure it doesn’t cause them to fail.


Danny Lopez is CEO of Glasswall


Main image courtesy of iStockPhoto.com and wildpixel

