
ChatGPT: the ultimate spreader of fake news?

Jason Gerrard at Commvault explains that ChatGPT doesn’t work like a normal search engine and that users need to treat its outputs with caution

 

ChatGPT has taken the world by storm. Unless you’ve been living under a rock, you already know what it is: an artificial intelligence (AI) chatbot that can answer questions or complete almost any task you set it (manual, real-life tasks aside).

 

The benefits are being widely spoken about: saving time and resources, cutting costs, and creating innovative content, to name just a few. 

 

Yet it is becoming increasingly apparent that ChatGPT is not always accurate. If asked something it doesn’t know, the chatbot will confidently give an answer even when it is wrong, and most users, having no way to know better (after all, there was a reason they asked it in the first place!), are likely to believe it.

 

In fact, ChatGPT will even argue with users who suggest it is wrong, insisting that the answer it gave is correct.

 

This occurs because ChatGPT is not like a search engine. Search engines crawl and index web pages, rank the most relevant results, and present them so the user can find the answer they are looking for. ChatGPT draws on the vast body of text it was trained on, but rather than relaying that material to the user exactly as it was found, it constructs its answer by making a series of statistical guesses about what should come next.

 

That is where it has potential to go wrong and fake news starts to be generated.
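
To make the contrast concrete, here is a deliberately simplified Python sketch. Everything in it (the documents, the continuation weights and the function names) is invented for illustration and is not how ChatGPT is actually built. The point is the shape of the two approaches: retrieval returns stored text verbatim or admits it found nothing, whereas generation always produces a fluent answer, right or wrong.

# A toy illustration only: invented data and weights, not ChatGPT's real design.
import random

# "Search engine": a lookup over stored documents, returned verbatim.
DOCUMENTS = {
    "capital of france": "Paris is the capital of France.",
}

def search_engine(query: str) -> str:
    # Retrieval: return a stored document word for word, or admit nothing was found.
    return DOCUMENTS.get(query.lower(), "No results found.")

# "Chat model": toy learned continuations, each with a probability weight.
CONTINUATIONS = {
    "the capital of australia is": (["Canberra.", "Sydney."], [0.4, 0.6]),
}

def chat_model(prompt: str) -> str:
    # Generation: sample a fluent continuation from learned probabilities.
    # There is no "I don't know" branch, and here the wrong guess ("Sydney.")
    # happens to carry the higher weight.
    options, weights = CONTINUATIONS[prompt.lower()]
    return prompt + " " + random.choices(options, weights=weights)[0]

print(search_engine("capital of france"))         # verbatim stored text
print(chat_model("The capital of Australia is"))  # confident, possibly wrong

Because the model must always complete the sentence, a confident but wrong answer is a routine outcome rather than an exception.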

 

But it is not just the way ChatGPT constructs answers that creates the risk of fake news. As an artificial intelligence chatbot, ChatGPT does not have the ability to reason as humans do. It cannot distinguish an opinion piece from a factual news article, so it takes everything at face value.

 

With fake news estimated to make up 62% of all information on the internet, the likelihood of ChatGPT encountering false information in its training data and relaying it to you as a definitive answer is high.

 

Furthermore, with its record-breaking, fast-growing user base, it is likely that users will be asking ChatGPT similar questions and getting similar, if not exactly the same, answers. As a result, multiple duplicates of false information could end up online. Repeated and restated so many times, it becomes cemented as fact.

 

A risky business

ChatGPT can be useful across a range of business departments, from marketing to product development. When creating a new marketing campaign or developing a new product, many employees turn to Google for inspiration, perhaps searching for the industry’s latest pain points so they can dedicate their work to solving them. With ChatGPT accessible to everyone, it is quickly becoming a useful tool in the idea-generation stage of new projects.

 

Yet there is a real risk of ChatGPT providing an incorrect answer that teams then build a whole product or campaign around. For example, ChatGPT could present an industry problem from five years ago as one of the sector’s current pain points. Taking this as fact, a marketing team could build an entire campaign around the idea, resulting in serious reputational damage as the business comes across as out of touch and obsolete.

 

Worse still, if a business were to build a whole product around this supposed pain point, millions of pounds could be invested in a solution its customers do not need. This would not only damage its reputation but also result in a significant financial loss.

 

Keep the humans! 

Mistakes like these can be avoided if a little expertise is applied alongside the technology. Humans shouldn’t be taken out of the equation just yet, and any information ChatGPT gives you should be checked, and then double-checked, by experts in the field.

 

Who knows whether our jobs will one day be replaced entirely by robots, but for now, at least, human expertise is still essential in business. Think of ChatGPT as an ‘AI co-worker’ that helps you with certain tasks and comes up with ideas, but at the end of the day you still have to do your part of the job.

 

This isn’t to say that ChatGPT, and AI in general, isn’t valuable. These tools can save hours of employees’ time and come up with some great insights and ideas.

 

ChatGPT is still in its early stages, and new updates are being brought out regularly. It is only a matter of time until it is a reliable source but, until then, businesses must proceed with caution. 

 


 

Jason Gerrard, Senior Director of International Systems Engineering at Commvault

 

Main image courtesy of iStockPhoto.com
