
AI: data security threat or the silver bullet?

Barry Cashman at Veritas Technologies argues that organisations that wish to use AI responsibly will need the right technological tools to keep data processing safe and compliant


In January 2023, ChatGPT was estimated to have reached 100 million monthly active users, just two months after launch, making it the fastest-growing consumer application in history.


The tool is exploding in popularity and usefulness before our very eyes. And it is not alone: general-purpose large language models (LLMs) such as Bard, ChatGPT and Bing Chat are opening up opportunities for businesses that go far beyond the traditional chatbot’s limited remit of personalised customer experiences and internal helpdesk and troubleshooting services.


The emergence of these tools has brought the realisation that they can serve a broad range of purposes, such as writing code, drawing insights from research text, or creating marketing materials such as website copy and product brochures. These services can also be accessed through APIs, which allow organisations to integrate the capabilities of publicly available LLMs into their own apps, products and in-house services tailored to their specific needs.
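
By way of illustration, the sketch below shows the general shape of such an integration, assuming OpenAI’s public chat completions endpoint; the summarisation helper, model choice and minimal error handling are illustrative rather than a recommendation.

```python
import os

import requests

# Minimal sketch: wrapping a publicly available LLM API in an in-house
# helper. Assumes OpenAI's chat completions endpoint; other providers
# expose similar HTTP interfaces. The key is read from the environment
# rather than hard-coded.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def summarise(text: str) -> str:
    """Illustrative in-house service: summarise a document via the LLM."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative model choice
            "messages": [
                {"role": "system", "content": "Summarise documents concisely."},
                {"role": "user", "content": text},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```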


Adopting AI technologies can help organisations create efficiencies, gain a competitive edge, and reduce manual requirements, thereby increasing their revenue. Used effectively, they can also help elevate employee capabilities by offering access to resources that were previously unavailable, thus enhancing an individual’s knowledge base and skill set.


The sheer pace of progress in the AI space has spurred businesses to rush full steam ahead in implementing such technologies, so as not to fall behind their competitors. But this is also putting added pressure on IT decision makers to ensure these advancements fit into their existing data management strategy. 


As the implementation of AI in business processes becomes increasingly common, it brings a range of considerations over potential risks and blind spots that can arise.


AI’s risks and blind spots


Data privacy 

When integrating AI into business processes, organisations will typically train the AI not only on data gathered from online sources but also on their own data – potentially including sensitive company information and IP. But this creates significant security risks for organisations that become dependent on these AI-enabled processes without the proper framework in place to keep that information safe.


Organisations interacting with these services must ensure that any data used for AI purposes is subject to the same principles and safeguards around security, privacy, and governance as data used for other business purposes.
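
One practical safeguard, consistent with those principles, is to scrub obvious identifiers from prompts before they leave the organisation. The sketch below is a minimal illustration using regular expressions; the patterns and placeholders are hypothetical, and a production deployment would use a dedicated PII-detection service with far broader coverage.

```python
import re

# Illustrative patterns only; real systems need a dedicated PII-detection
# service covering many more identifier types and formats.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44|\b0)(?:[\s-]?\d){9,10}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely identifiers with placeholders before the prompt is
    sent to an external LLM service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com or call 020 7946 0123 re: invoice."))
# -> "Email [REDACTED EMAIL] or call [REDACTED UK_PHONE] re: invoice."
```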


Many organisations are already alert to the potential dangers. Amazon, for instance, recently warned its employees about ChatGPT: staff had been using the tool to support engineering and research work, but a corporate attorney cautioned against this after seeing the AI mimic confidential internal Amazon data.


The developers of ChatGPT themselves warn against sharing sensitive information in conversations with the tool, stating that they are unable to delete specific prompts from a user’s history. Microsoft, likewise, states that it “would not recommend feeding company confidential information into any consumer service”.


Data integrity 

Organisations must also consider how to ensure the integrity of any data processes that use AI, and how to secure the data in the event of a data centre disruption or a ransomware attack. They should also scrutinise the data they feed into the AI engine and its provenance, as not all information produced by AI is accurate.


Indeed, ChatGPT’s developers warn that “it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.” 


Moreover, businesses must ask themselves how they will protect the data produced by AI, ensuring that it complies with local legislation and regulations, and is not at risk of falling into the wrong hands.
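
A basic integrity control that supports both aims is to record content hashes of the data flowing into and out of the AI engine, then re-check them after any incident. Below is a minimal sketch assuming a simple JSON manifest; the file layout and manifest name are hypothetical.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("ai_data_manifest.json")  # hypothetical manifest location

def sha256_of(path: Path) -> str:
    """Content hash of a single data file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record(data_dir: Path) -> None:
    """Record a hash for every file feeding into or produced by the AI."""
    manifest = {str(p): sha256_of(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify() -> list[str]:
    """Return paths whose contents changed (or vanished) since recording,
    e.g. after a data centre disruption or suspected ransomware incident."""
    manifest = json.loads(MANIFEST.read_text())
    return [p for p, digest in manifest.items()
            if not Path(p).is_file() or sha256_of(Path(p)) != digest]
```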


Data ethics

Businesses using AI technologies must grapple with questions about transparency, accountability, and bias. If a chatbot is designed to interact with customers, it’s important to be transparent with users that they are interacting with an AI, ensuring they are aware of the system’s capabilities and limitations.


As AI responses are learned from diverse data sources, vigilance is required to prevent unintentional propagation of biases or controversial content.


Striking a balance between innovation and ethical responsibility is paramount to maintain trust and integrity, particularly when it comes to customer interactions. IT managers should scrutinise the chatbot’s algorithms and training data to minimise potential biases that could lead to unfair or discriminatory responses and share any ethical concerns and best practices with employees. 


Ransomware risks

A broader consideration is what developments in AI mean from a security perspective. The tools will be adopted not only for productive use cases but also by bad actors, who will look to apply the technology to increase the scale and sophistication of the cyberattacks they conduct.


It is imperative for individual organisations to recognise the potential harm that AI can cause to their operations and take the necessary steps to protect themselves from cyberattacks and data breaches.


Changing lives with AI

Considered use of emerging technologies like AI has the power to change lives – it can transform consumer experiences, help governments make more informed decisions, accelerate scientific discovery, improve the delivery of more personalised healthcare services, and so much more.


Yet AI is advancing faster than many organisations can keep up with, and its applications will continue to be highly data-intensive. Ensuring secure and compliant use of AI data, and safeguarding against data breaches and cyber-security threats, is no mean feat for those who don’t have the right technologies in place.


For the organisations that work alongside their IT and legal teams to create an AI strategy that is seamlessly embedded into their overall data management strategy, the opportunities are endless. 


Barry Cashman is Regional Vice President UKI at Veritas Technologies


Main image courtesy of iStockPhoto.com
