AICustomerService
ThoughtStorms Wiki
I was explaining to someone the other day why AI won't make customer service better.
The problem with customer service is that customers, by definition, WANT something non-standard from the organisation. And the organisation typically doesn't want to give it.
It doesn't matter whether what's on the end of the telephone or chat window is a human or an AI. It doesn't matter how clever they are. Or how slick and easy to understand the documentation is. What determines the customer service experience is whether the organisation is willing to put itself out and give the customer the thing it otherwise wouldn't.
The frustration for customers is that the organisation either refuses point blank. Or (more typically) doesn't have any mechanism set up to make a decision about it. And the more friendly and caring the person on the phone sounds. The more feelgood words written on the site. The more sophisticated the language model that drives the chatbot. All of this contributes to the frustration. "If these people are as nice as they sound, why the f*** don't they just reimburse me the £300 quid they owe me?"
Smarter, more "human-like", AI can't make that situation and frustration, any better. It can only make it worse.
UNLESS, the organisation is willing to empower the AI to make such decisions and start making payments to people who ask for them.
And that is NOT going to happen.
AIs are demonstrably gullible to all kinds of jailbreaking techniques and manipulations through words. Researchers have shown that hidden comments in webpages and zero-sized text in emails can manipulate an LLM that reads it. (Imagine if neuro-linguistic programming were real. It is for models made of nothing but language).
If you made an LLM agent that could simultaneously make payments and be persuaded by a sob-story, then it would immediately haemorrhage money from the organisation.
And systems within companies are intended to block leaks, not enable them. There's a reason there's no way to get the decision made on your problem. It's not a bug, it's a feature. If one existed, then humans in customer service could trigger it more frequently, and more payouts would end up getting made.
No company wants that.
So AI will either be adopted as one more facade, to make the customer experience more frustrating in the gap between promise and reality. Or will be a technology literally designed to increase costs to the company, and so won't get deployed.
Anyway, having made this argument, I decided to step back and consider it before writing it up. What am I missing? What other scenarios are possible? The times when solving the customers' problem is such a trivial cost that the company doesn't mind paying it? In which case, why is it still a "problem" in the first place? Why haven't the systems in the company been designed to avoid these issue in the first place?
See also :
Backlinks (2 items)