While at university, I interviewed for an internship at a finance company in New York. I had taken some economics courses and got good marks, and the interview process had, so far, gone well.
In my final interview, the “Big Boss” asked me a few questions about a company I ought to have known about. His questions quickly exceeded my comfort zone. I didn’t want to say, “I don’t know,” so I started to make things up. I thought I did well, and I was proud of myself for having spoken intelligently about something I knew nothing about.
The Big Boss looked at me and said, “You’re the most dangerous kind of employee. You sound pretty good, but I know you’re full of it.” He’d seen right through me, but his concern was that others might not. They might believe I knew what I was talking about and let me make decisions I wasn’t well informed enough to make.
As a student, I was used to answering questions to display aptitude. But in a professional setting, what you say has consequences. As I’ve become more adept at working with teams, I’ve also become more comfortable saying, “I don’t know,” and adding context around what I do know, which may help my colleagues take a step in the right direction.
When I interview candidates, I sometimes ask questions they couldn’t possibly know the answers to, just to see how they react. I’ve come to associate the willingness to say, “I don’t know,” with a set of traits I value in my team – self-awareness, humility and a willingness to listen.
Last November, I interviewed the most dangerous candidate I have ever encountered. Very smart, super confident, often wrong, and seemingly incapable of saying, “I don’t know.”
That employee is generative AI.
Generative AI interviews well. Tell it to write a story about a family of foxes in the style of Edgar Allan Poe, and you get just that. But it also gets basic things wrong and delivers the wrong answer with all the confidence of someone who knows what they’re talking about.
Generative AI and customer service
At PolyAI, we make voice assistants for customer service and guest care.
We recently tried using a generative AI model to answer questions about one of the hotels we work with. The result sounded great. It could answer all sorts of niche questions about the hotel car park, such as the height clearance and hourly rates. But there was a problem. The hotel in question didn’t have a car park.
When we see a smart person write a story about foxes in the style of Edgar Allan Poe, most of us will assume that person is also capable of solving simple tasks correctly, consistently and predictably. But that’s not how large language models work. They don’t know right from wrong – only what is more similar and less similar to the training data.
Generative AI may be a “dangerous” employee, but it’s one we can’t afford not to hire. In customer service use cases, generative AI will enable companies to answer more questions at a more granular, personalised level and provide opportunities for training and improvement that will allow us to move past the “I’m sorry, I didn’t get that” era of voice technologies.
But, as with all of our employees, we must ensure that generative AI has the right teammates, remit and training to achieve the best possible results.
Put generative AI on the right team
Over the past year, we’ve seen a flurry of generative AI applications that claim to solve all sorts of business problems. A year on, it’s becoming clear that this technology cannot reliably solve real-world problems on its own. It works best alongside other, more predictable technologies – its “colleagues”, if you like.
Generative AI models are great at generating responses to long-tail, less common questions, but they require the support of speech recognition and spoken language understanding technologies to work in a voice context.
It takes a village of technologies to produce great customer experience. Generative AI will be an important part of the conversational AI technology stack, but it’s just one part.
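To make the “village” concrete, here is a deliberately simplified sketch in Python. Every function in it is a stand-in I’ve invented for illustration – real deployments use dedicated components for each stage – but it shows how the generative model is only the last link in a longer chain:

```python
# A simplified, hypothetical voice pipeline: audio in, reply out.
# Every function is an invented stand-in; real systems use dedicated
# speech recognition (ASR) and spoken language understanding (SLU)
# components, with the generative model as only the final step.

def transcribe(audio: bytes) -> str:
    """Speech recognition: stand-in for an ASR engine."""
    return "what time does the car park close"

def understand(text: str) -> dict:
    """Spoken language understanding: stand-in for an SLU model."""
    return {"topic": "parking", "question": text}

def respond(meaning: dict, generate) -> str:
    """Only this last step calls the generative model."""
    prompt = f"Answer this {meaning['topic']} question: {meaning['question']}"
    return generate(prompt)

def handle_turn(audio: bytes, generate) -> str:
    """Chain the stages: recognise speech, extract meaning, then generate."""
    return respond(understand(transcribe(audio)), generate)
```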
Give generative AI the right job
Generative AI excels at answering simple FAQs and will enable enterprises to summarise insights from customer conversations easily. But it has weaknesses. In more procedural customer service transactions, such as booking a hotel room or a doctor’s appointment, intent-based models are still more accurate, faster and cheaper to deploy.
For example, at PolyAI, we’ve developed an orchestration engine that determines whether an intent or generative model should handle each part of the conversation. This ensures we give generative AI the jobs it’s good at, not the ones it’s not.
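To illustrate the idea (and only to illustrate it – this is not our production code, and every name, intent and threshold below is an assumption), the core routing decision might look something like this:

```python
# A minimal, hypothetical sketch of intent/generative routing.
# The names, intents and threshold are illustrative assumptions,
# not PolyAI's actual orchestration engine.

from dataclasses import dataclass
from typing import Callable

@dataclass
class IntentPrediction:
    intent: str        # e.g. "book_room"
    confidence: float  # classifier confidence in [0, 1]

# Procedural transactions handled by deterministic intent-based flows.
INTENT_FLOWS: dict[str, Callable[[str], str]] = {
    "book_room": lambda u: "Sure, which nights would you like to stay?",
    "cancel_booking": lambda u: "I can help with that. What's your booking reference?",
}

CONFIDENCE_THRESHOLD = 0.85  # below this, don't trust the classifier

def route_turn(
    utterance: str,
    classify: Callable[[str], IntentPrediction],
    generate: Callable[[str], str],
) -> str:
    """Send each conversational turn to the model best suited to handle it."""
    prediction = classify(utterance)
    flow = INTENT_FLOWS.get(prediction.intent)
    if flow is not None and prediction.confidence >= CONFIDENCE_THRESHOLD:
        return flow(utterance)   # predictable, procedural path
    return generate(utterance)   # long-tail questions fall back to the LLM
```

The design point is that the deterministic path wins whenever the classifier is confident; the generative model only sees the turns the intent flows can’t handle.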
Give generative AI the right training
Tools such as ChatGPT are great at talking, but they’re not trained to take action.
Consider a customer service use case. It’s not enough to tell a customer you’ve cancelled their order; you have to actually do it.
LLMs can answer questions but don’t know when to send an API call, send an SMS, make a record change in the CRM, hand off to an agent or take any of the other actions required to solve a customer’s problem. Additional training is necessary to enable LLMs to work with other technologies to take action.
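As a sketch of what that looks like in practice – assuming a model that emits structured “tool calls”, a pattern several LLM APIs now support, and using invented function names throughout – the action-taking layer might resemble:

```python
# A rough, hypothetical sketch of letting a language model trigger actions
# rather than just talk about them. We assume the model has been trained or
# prompted to emit a structured "tool call" as JSON; every function below is
# an illustrative stand-in, not a real PolyAI or vendor API.

import json

def cancel_order(order_id: str) -> str:
    """Stand-in for the real system of record (CRM/order API)."""
    return f"Order {order_id} has been cancelled."

def send_sms(number: str, message: str) -> str:
    """Stand-in for an SMS gateway."""
    return f"SMS sent to {number}: {message}"

def handoff_to_agent(reason: str) -> str:
    """Stand-in for a live-agent transfer."""
    return f"Transferring to a human agent ({reason})."

TOOLS = {
    "cancel_order": cancel_order,
    "send_sms": send_sms,
    "handoff_to_agent": handoff_to_agent,
}

def execute(model_output: str) -> str:
    """Parse the model's structured tool call and actually perform it."""
    call = json.loads(model_output)
    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        # The model asked for something we don't support: escalate, don't guess.
        return handoff_to_agent("unrecognised tool request")
    return tool(**call.get("arguments", {}))

# Example: the model decides to cancel an order, and the system makes it real.
print(execute('{"tool": "cancel_order", "arguments": {"order_id": "A1234"}}'))
```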
Treating AI as an employee like any other
I’ve always found it interesting that people working with AI eventually humanise software (our clients always give their voice assistant a human name). I think it’s partly because AI software behaves non-deterministically – it’s often unexpectedly smart but also unexpectedly dumb.
As companies hone their AI strategies, it can be helpful to think of AI as having strengths and weaknesses, just as an unproven employee might, and to frame your AI strategy as a form of talent development – giving it the right teammates, assignments and training to be successful.
To hear more about how leaders in retail, logistics, hospitality and other industries are using AI in their businesses, check out the PolyAI Vox talks or reach out to one of our Voice AI experts.
By Yan Zhang, COO, PolyAI