
Regulation and the artificial intelligence revolution

Mary Kernohan at SnapDragon Monitoring asks: should we say goodbye to unregulated AI?

Today’s news headlines don’t paint a safe and secure picture of AI.

Whether the subject is our jobs, our lives or our online safety, the news is flooded with fear-inducing headlines reporting that AI is set to destroy humanity and turn the internet into an unwieldy “Wild West” that endangers businesses and consumers.

Fortunately, the most extreme predictions are far-fetched, for now. But amid the hard-hitting headlines, an important question is surfacing: is it time for some form of AI regulation to be introduced?

The AI revolution

While scientists have been studying and researching various forms of AI since the mid-20th century, it is only in the last few years that the AI arms race has begun to reach the wider public and businesses.

Over the last decade, the world has seen rapid advancements in AI technology, allowing it to seep into almost every industry and home. But as AI grows in popularity within businesses and households, it has also become an attractive target for cyber-criminals, who are exploiting its capabilities to launch cyber-attacks on an unprecedented scale.

The dangers of tools such as ChatGPT

Among the highest-profile AI tools to hit the market is ChatGPT. Capturing the world’s imagination, this generative AI tool has the internet enthralled and appalled in equal measure. Ask it to “paint” like Van Gogh and it will. Ask it to write a poem better than Hemingway and it will give it its best shot.

Used judiciously, the tool is useful and intuitive. Used improperly, it can become a criminal weapon of mass destruction.

For example, consider some of the ways AI and ChatGPT can be, and already are, being used to hit businesses and consumers:

  • AI tools may be used to create official-looking communications that trick an organisation’s customers. Criminals can ask ChatGPT to draft an email, for example, in the style of a well-known bank or other entity, with identical font, imagery and tone, lulling recipients into clicking on a malicious link or revealing confidential information. The result? Letter-perfect financial scams churned out in industrial quantities. Far superior to the clichéd ‘Nigerian prince’ scams of the past, AI-generated schemes are accurate and believable enough to fool even the most sceptical of internet users.
  • Another key concern with ChatGPT is its ability to replicate websites and profiles at a stroke, creating a mirror image of a genuine company’s online presence. For users visiting the fake version, whether a supposed discount outlet or a company’s ‘official’ website, the chance of unwittingly carrying out transactions and handing over confidential financial information is similarly huge.
  • While individuals typically use ChatGPT innocuously, from asking silly prompts to automating repetitive tasks, the threat of intellectual property theft is all too real. Imitation may be the sincerest form of flattery, but AI tools have been widely documented infringing the copyright of creators, from artists to brand innovators, on a huge scale. Left unchecked, abuse on this scale threatens to destroy fragile industries, undermine creativity and mislead consumers.

These are just a few of the scenarios in which AI can be used maliciously. What’s more, combined with deepfake technology that enables ever more sophisticated fake news, fake personas and more, the threat AI poses to consumers, brands and society at large is immense.

So does this mean it’s time to regulate AI?

Regulating AI

While debate over the merits of AI intensifies worldwide, we believe that targeted, sensible regulation is an essential part of protecting organisations and individuals.

Why is this necessary? At present, AI innovation is a largely opaque venture conducted behind closed doors, with little to no oversight of these projects, their goals or their backing before they go public. Although many of these innovations are being developed to benefit individuals and businesses, some are also being created with malicious intent.

What’s more, while trade secrets and discretion are integral to innovation and intellectual property, artificial intelligence, like every industry online and offline, must be governed, ensuring organisations don’t, intentionally or otherwise, unleash a product that could harm society.

This fine balance is something governments across the world are also weighing, with the UK Prime Minister, Rishi Sunak, recently announcing ambitions for the country to become an AI superpower. Sunak argues that AI can be governed through international agreements in which nations work to a standardised code of ethics.

While this would indeed help mitigate the risks AI can pose to citizens worldwide, sadly the global picture is not so simple.

As agreements ranging from the Geneva Convention to climate accords have shown, one of the biggest challenges with this type of pact is that not every country will join it, or abide by its code of ethics. Russia and North Korea, for instance, are well known for carrying out malicious online attacks on the UK and US, and it is highly unlikely that either would sign such an agreement.

So what now? We believe individual governments should take protective measures and work together regardless of any global consensus, but in the meantime the threat to brands and consumers remains. It is therefore vital that organisations take steps to protect their own assets, both before and after any regulation is introduced.

Taking a proactive approach to identifying AI threats is the number one priority. This means monitoring the web not only for misuse of an organisation’s products or intellectual property, but also for fake websites set up in its name and phishing emails sent to its customers. Once a threat is identified, reporting, removal and, where necessary, education are key, ensuring harmful content is taken down and customers are protected.
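To make this concrete, here is a minimal Python sketch of one such monitoring step: flagging candidate domains whose names sit confusably close to a brand’s, a common precursor to the fake websites and phishing campaigns described above. The brand name, the hard-coded candidate list and the edit-distance threshold are all hypothetical illustrations under stated assumptions, not SnapDragon’s actual tooling, which draws on many more signals.

```python
# Minimal sketch: flag candidate domains that look confusably close to a brand name.
# The brand, the domain list and the threshold are illustrative assumptions only.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                current[j - 1] + 1,           # insertion
                previous[j] + 1,              # deletion
                previous[j - 1] + (ca != cb)  # substitution
            ))
        previous = current
    return previous[-1]

def flag_lookalikes(brand: str, domains: list[str], max_distance: int = 2) -> list[str]:
    """Return domains whose first label is within max_distance edits of the brand."""
    flagged = []
    for domain in domains:
        label = domain.lower().split(".")[0]  # compare the part before the TLD
        if label != brand and levenshtein(brand, label) <= max_distance:
            flagged.append(domain)
    return flagged

if __name__ == "__main__":
    # Hypothetical brand and candidate feed, for demonstration only.
    candidates = ["examplebank.com", "examp1ebank.net", "exarnplebank.co", "weather.org"]
    print(flag_lookalikes("examplebank", candidates))
    # -> ['examp1ebank.net', 'exarnplebank.co']
```

In practice, a feed of newly registered domains would replace the hard-coded list, and anything flagged would pass into the reporting and takedown workflow described above.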

Remember: by being aware of the types of AI threats that could impact businesses and consumers today, together we can detect and mitigate them faster. What’s more, by understanding the problem first-hand, businesses can stay one step ahead as the AI arms race continues to unfold.

Mary Kernohan is head of nurture at SnapDragon Monitoring

Main image courtesy of iStockPhoto.com
