It’s a good precedent. Nip this shit in the bud immediately. AI agents you allow to speak on behalf of your company are agents of the company.
So if you want to put an AI up front representing your company, you need to be damn sure it knows how to walk the line.
When there’s a person, an employee, involved, the employee can be fired to symbolically put the blame on them. But the AI isn’t a person. It can’t take the blame for you.
This is a very nice counterbalancing force to slow the implementation of AI and to incentivize its safety/reliability engineering. Therefore, I’m in favor of this ruling. If an AI chatbot promises you a free car, the company has to get you the car.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Just no.
If you can’t guarantee it’s accurate then don’t offer it.
As a customer, I don’t want to have to deal with lying chatbots and then figure out whether what they told me is true.
Exactly. The goal of customer service is to resolve issues. If communication isn’t precise and accurate, then nothing can be resolved.
Imagine this:
“Okay Mr Jones. I’ve filed the escalation as we’ve discussed and the reference number is 130912831”
“Okay, so are we done here?”
“You may end this conversation if you would like. Please keep in mind that 20% of everything I say is false”
“But we’re done right?”
“Yes”
“What was that confirmation number again?”
“783992831”
“That’s different from the one you gave me before”
“Oh sorry my mistake the confirmation number is actually 130912831-783992831. Don’t forget the dash! Is there anything else I can help you with?”
Good! You wanna automate away a human task, sure! But if your automation screws up you don’t get to hide behind it. You still chose to use the automation in the first place.
Hell, I’ve heard ISPs here work around phone reps overpromising by literally having the rep transfer you to an automated system that reads the agreement aloud and has you agree to it, with an explicit note that everything said before is irrelevant, then transfers you back to the rep.
That shouldn’t work. They should still be unconditionally liable for anything the rep said, with the sole exception being obvious sabotage like “we’ll give you a billion dollars to sign up” that the customer knows can’t be real.
Wow, wasn’t expecting such a feel-good AI story.
I wonder if I could fuck with my ISP’s chatbot 🤔
It’s common courtesy to post the plain text of a paywalled article.
Copy pasting entire articles is discouraged. It is preferable to share a link to an archive website such as this: https://archive.is/5UPAI
They wanted human employees replaced by AI. But wanting responsibility and accountability replaced as well is going a bit too far. Companies should be forced to own up to anything their AI does, as if it were an employee. That includes copyright infringement. And if the mistake is one worth firing an employee over, then we should demand that the management responsible for it be fired instead.