Business Reporter

AI can give you the edge – but can you trust it with your customer data?

Sponsored by LeadDesk

Artificial intelligence, particularly generative AI, has catapulted to prominence, with companies turning to it to streamline and automate many of their business processes.

A notable example is Klarna AB, which has transformed many of its corporate functions, such as marketing, customer service and legal, with a broad introduction of genAI into just about every facet of its operations. Almost 90 per cent of Klarna employees use tools such as Klarna’s own AI assistant, Kiki, which is based on ChatGPT creator OpenAI’s large language models, to help solve everyday tasks at work.

As always, however, regulators are watching carefully, assessing the safety of such solutions. And with good reason.

The European Union is a world pioneer in data and privacy regulation. It began with the Data Protection Directive in 1995, before widening the scope of its rules with the much-publicised General Data Protection Regulation (GDPR) in 2016 and the NIS2 Directive in 2022. And this year’s EU AI Act and the upcoming ePrivacy Regulation, expected in 2025, are set to bolster the union’s regulatory frameworks even further.

And outside the EU’s remit too, hefty penalties await companies that fail to take seriously their responsibility to safeguard their customers’ data, or that use that data in inappropriate ways. Take, for example, the case of iTutorGroup, which trained its AI recruitment tool to discriminate against candidates over a certain age – and later settled out of court to the tune of $365,000 after being collared by the US Equal Employment Opportunity Commission.

But less deliberate incidents have also had harsh consequences for companies. Air Canada was ordered to honour a customer refund mistakenly promised by its chatbot. Samsung banned ChatGPT among its employees after an engineer uploaded internal source code to it – intellectual property that could end up on the servers of other firms operating the software. Amazon, JPMorgan Chase and many other US financial institutions have followed suit, cracking down on genAI tools that make it impossible for them to control the integrity of their internal data.

GenAI brings results

That said, it can’t be denied that generative AI brings results. Take Klarna’s investment in bolstering its customer service with AI: the assistant automatically handled 66 per cent of the fintech’s customer service chats in its first month.

Contact centre software provider LeadDesk reports similar results in its AI chatbot case studies. One of its customers had 86 per cent of customer service queries answered in chat, resulting in 22 per cent lower query volumes for agents to handle across all channels. Another LeadDesk customer reported 65 per cent shorter queuing times before customers reached a human agent in online chat after implementing the AI chatbot, with average wait times falling from 104 seconds to 36 seconds.

But while such results might make clear the strengths of AI technology in customer service, how can contact centre managers both stay compliant with EU regulations and benefit from genAI?

Staying compliant with genAI

“Choosing the correct partner is the most crucial decision you can make when adopting AI in your contact centre,” explains Miikka Haavisto, Director of Business Development in LeadDesk’s AI unit.

For companies operating in the EU, data security and regulatory expertise are paramount, and a contact centre software provider with AI capabilities must show that it understands and follows both current and upcoming regulations. Looking for a provider with security certifications issued by external auditors should be standard practice for any decision-maker purchasing a customer communication tool.

ISAE 3000 SOC 2 and ISO 27001 are minimum standards to look for in any contact centre provider. When it comes to AI capabilities, asking how the provider sets up its AI is also a crucial consideration. Providers who care about your data will not use it to train other language models, nor will they share it with other parties to train theirs. These are exactly the kinds of situations that would have concerned the privacy experts at Samsung.

It is also essential to look for a partner who prioritises your objectives and emphasises the need for human oversight when implementing an AI contact centre project. “For our AI products, we always work with customers to define what they actually need from the AI, so they can build a solution that is checked by people, ensuring they keep control of their data and retain a high level of quality in their customer service operations,” says Haavisto. “You can only get the best out of AI with continuous monitoring and regular improvement.”

Compliant genAI with top results

LeadDesk’s chatbot case studies show that generative AI is a clear efficiency generator for customer service teams. However, regulatory penalties and embarrassing oversights born of poor control threaten companies that don’t choose partners with a strong background in security and compliance.

By partnering with a contact centre software provider that holds both SOC 2 and ISO 27001 certifications and has strong expertise in AI security and its regulatory frameworks, customer service managers can safely enjoy the efficiency AI brings to their contact centre operations.


LeadDesk is a CCaaS provider that builds security- and compliance-driven software to help sales and customer service experts connect with their customers.


Colm Ó Searcóid, CX Content Writer, LeadDesk


Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543