Business Reporter

The dangers of unchecked AI adoption

Simon Bain at OmniIndex warns of the potential dangers that will come from unchecked AI adoption and outlines four things the government and public sector need to know about AI ahead of the Autumn Budget

 

According to reports from Reuters, Labour’s autumn budget is set to prioritise public sector adoption of technologies such as AI over direct investment in the industry. After scrapping the previous government’s £1.3 billion plan – which included an £800 million supercomputer at the University of Edinburgh – the Labour Party is planning to announce a new strategy aimed at delivering efficiencies and cost savings within the public sector.

 

Ahead of AI’s further integration into UK public services, there are multiple factors to consider carefully to ensure that its adoption is safe and effective. The government and public sector need to consider these before investing heavily in AI.

 

1. The problem with LLMs

The rise of ChatGPT has seen acronyms like LLM enter the vocabulary of the UK public. But to be frank, the novelty around large language models (LLMs) has worn off and enthusiasm for them has significantly waned. The inaccuracies they continue to throw up – six-fingered humans in AI-generated images being only the most visible example – have eroded some of the public’s trust in their capabilities.

 

Instead, we’re seeing a rise in the popularity of small language models (SLMs): bespoke models built on carefully curated data sets that do one job well, rather than many jobs in a mediocre fashion. For organisations and their teams, investing time and resources into personalised SLMs is far more likely to deliver information that can be relied on to be correct and appropriate, and that actually supports employees in doing their jobs.
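To make the "one job well, on curated data" idea concrete, here is a minimal, hypothetical sketch in Python. The records, file names and threshold are all invented for illustration – the point is that a narrow assistant answers only from its vetted corpus, returns the source of every answer, and declines out-of-scope questions rather than hallucinating.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    text: str
    source: str  # provenance, so every answer can be traced back

def _tokens(s: str) -> set[str]:
    """Lower-cased, punctuation-stripped word set for naive matching."""
    return set(s.lower().replace(".", "").split())

def answer(query: str, corpus: list[Record], threshold: int = 2):
    """Return (text, source) for the best keyword match in the curated
    corpus, or None when nothing matches confidently -- the assistant
    declines rather than inventing an answer."""
    q = _tokens(query)
    best, best_score = None, 0
    for rec in corpus:
        score = len(q & _tokens(rec.text))
        if score > best_score:
            best, best_score = rec, score
    if best is None or best_score < threshold:
        return None  # out of scope: refuse instead of guessing
    return best.text, best.source

# Hypothetical curated data set for one narrow job (invented examples)
CURATED = [
    Record("Applications for a parking permit are processed within 10 working days.",
           "parking-policy-2024.pdf"),
    Record("A resident parking permit costs 80 pounds per year.",
           "fees-schedule-2024.pdf"),
]
```

Asked about permit costs, this sketch returns the fee record together with the file it came from; asked an unrelated question, it returns None. A real SLM is of course far more capable, but the contract – curated inputs, cited outputs, explicit refusal – is the same.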

 

2. AI and sustainability rarely go hand in hand

It is well reported that the extended use of some AI tools has extraordinary implications for the planet. ChatGPT’s daily power usage is reportedly nearly equal to that of 180,000 US households, and a single ChatGPT conversation consumes around half a litre of water.

 

It’s vital that public sector organisations take great care in the choices they make around which technologies to use and invest in, as well as who to partner with. The best solutions or partners will address these concerns before you ask, proving that they have considered the impact of their products.

 

More usage means more impact, and we need to reach a stage – soon – where AI can do what it does best without setting the nearest forest alight.

 

3. Not all data is good data

The internet holds an entire planet’s worth of information. Unfortunately, not all of it is true, and much of it is not useful. To be fit for purpose, any AI chatbot, assistant or solution must provide transparency around the information it returns to end users, so that it can be trusted to support them in their vital work.

 

Not only can some information be wildly inaccurate, but you also need to be able to state confidently that it is free to use. Avoid breaching copyright law – and facing expensive lawsuits – by making sure you know exactly where something has come from before it is used.

 

4. Not all solutions are secure

Feeding swathes of personal and sensitive data into an AI solution should strike fear into the heart of any cyber-security professional up and down the country. Should that data not be adequately protected and fall into the wrong hands, organisations can face heavy fines for their failures. The government often relies on outdated security measures and protocols to protect our data, as do many of the solution vendors that have entered the market in recent years.

 

Any model that handles sensitive data should ensure that it remains protected and, preferably, encrypted. The latest technologies make it possible for analytics to be performed on encrypted data, meaning it never has to be exposed in plaintext to anyone at all.
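One established family of techniques behind analytics on encrypted data is homomorphic encryption, where certain computations on ciphertexts correspond to computations on the hidden plaintexts. The sketch below is a toy Paillier-style additively homomorphic scheme – the primes are deliberately tiny and the code is in no way secure or production-grade; it only illustrates the principle that a sum can be computed without decrypting the individual values.

```python
import random
from math import gcd

def keygen(p: int = 293, q: int = 433):
    """Generate a toy Paillier key pair from two (tiny, insecure) primes."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m: int) -> int:
    """Encrypt integer m with fresh randomness r coprime to n."""
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c: int) -> int:
    """Recover m = L(c^lam mod n^2) * mu mod n."""
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n
```

Multiplying two ciphertexts modulo n² yields an encryption of the sum of their plaintexts, so a server could aggregate encrypted figures – totals, averages and the like – without ever reading the underlying records.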

 

For AI to be a viable addition to the public sector’s tech stack, we need to ensure that we can trust it with our data, and trust any of the partners it works with.

 

It’s not all bad news

There is some good news to impart to businesses looking to make the most of AI. There are solutions on the market built on good ideas, reliable data and secure technologies. There are also ways to ensure that your carbon footprint is minimised while receiving insights that aren’t built on erroneous data.

 

AI has enormous potential, provided it is adopted in a sustainable and sensible way. Small language models built on curated data sets – models that do one job well rather than many jobs poorly – are a good place to start. With the right technology in place, you can ensure that insights are drawn from trustworthy data that you can cite easily.

 

To add to this, the security benefits of web3 and blockchain technology can allow sensitive data to remain encrypted at all times, ensuring it doesn’t fall into the hands of cyber-criminals.

 

When investing heavily in a new technology with such transformative capabilities, it’s vital that due diligence is done to ensure it doesn’t hinder your business more than it helps it.

 


 

Simon Bain is CEO at OmniIndex

 

Main image courtesy of iStockPhoto.com and sankai
