Automation Disaster

When the robots take over… and immediately break everything.


Air Canada’s Chatbot Gave a Grieving Man Wrong Advice. The Airline Said the Chatbot Wasn’t Their Problem. A Tribunal Disagreed.

🚨 DISASTER LOG #004 | FEBRUARY 2024 | CATEGORY: CORPORATE SPIN + AI HALLUCINATIONS

In February 2024, a Canadian civil tribunal made legal history by ruling that an airline is, in fact, responsible for what its chatbot says. The ruling sounds so obvious that it’s almost embarrassing it needed to be stated. And yet here we are.

Jake Moffatt’s grandmother died in November 2023. Grieving and needing to travel urgently from Vancouver to Toronto, he consulted Air Canada’s virtual assistant about bereavement fares. The chatbot told him he could buy a regular ticket and apply for a bereavement discount within 90 days. He trusted the airline’s own AI. He bought two tickets totaling over CA$1,600. When he applied for the discount, Air Canada told him bereavement fares can’t be applied after purchase: the chatbot was wrong.

Air Canada’s response was remarkable. The airline argued before the tribunal that it could not be held responsible for what its chatbot said, treating its AI assistant as a separate legal entity, an independent contractor of misinformation, conveniently beyond the reach of liability. Tribunal member Christopher Rivers was unimpressed.

Air Canada argued it is not responsible for information provided by its chatbot. [The tribunal] does not agree.

– Tribunal member Christopher Rivers, in the most politely devastating ruling of 2024

THE ARGUMENT THAT THE CHATBOT IS SOMEHOW NOT AIR CANADA

Air Canada’s legal argument deserves a moment of careful examination, because it’s the kind of argument that either represents a profound misunderstanding of corporate liability or a very deliberate test of how far “it was the AI’s fault” can get you in court. The position was essentially: yes, this is our website, our brand, and our chatbot, but the chatbot is its own thing, legally speaking, and we can’t be held accountable for its statements.

The tribunal rejected this entirely. Air Canada, it ruled, had failed to take “reasonable care to ensure its chatbot was accurate.” The airline was ordered to pay Moffatt CA$812.02, including CA$650.88 in damages, for the mistake its AI made while Moffatt was grieving his grandmother. It is difficult to think of a worse context in which to be misled by a chatbot.

📋 DISASTER DOSSIER

Date of Incident: November 2023 (chatbot advice); February 2024 (tribunal ruling)
Victim: Jake Moffatt, who was also grieving his grandmother
Tool Responsible: Air Canada’s virtual assistant chatbot
The Lie: That bereavement fares could be claimed post-purchase (they cannot)
Damage: CA$1,640.36 in wrongly purchased tickets
Air Canada’s Defence: “The chatbot is not us”
Tribunal’s Response: “Yes it is. Pay the man.”
Amount Ordered: CA$812.02 (including CA$650.88 in damages)
Precedent Set: Companies are responsible for their chatbots. Astounding.
Audacity Level: ✈️✈️✈️✈️✈️ (Cruising altitude)

WHY THIS MATTERS BEYOND ONE CA$812 RULING

The Air Canada case established something that will ripple through corporate AI deployments for years: you own your chatbot’s outputs. This seems obvious. It wasn’t, apparently, to the legal team at Air Canada, and it almost certainly isn’t to every other company that’s deployed a customer-facing AI and quietly assumed that “AI error” was some kind of legal firewall.

The ruling also puts a name to the actual failure: Air Canada didn’t take “reasonable care” to ensure its chatbot was accurate. That’s a standard that, if applied consistently, should cause a great many customer service chatbots to be very quickly audited, retrained, or replaced with a phone number and a human being who knows the bereavement fare policy.

THE CHATBOT’S SIDE OF THE STORY

The chatbot, for its part, was simply trying to be helpful. It produced what it was trained to produce: an approximation of helpfulness, assembled from patterns that may or may not have reflected the airline’s actual bereavement fare policies at any given time. The chatbot did not know it was wrong. It didn’t know anything. That’s rather the point.

Deploying a confidently wrong AI assistant on a customer service portal and then arguing the company isn’t responsible for the confidence is, ultimately, a choice. Air Canada made it. The tribunal disagreed. Jake Moffatt, still grieving, received CA$812.02 and the quiet satisfaction of a landmark legal precedent.


Sources: British Columbia Civil Resolution Tribunal (February 2024), reporting by multiple outlets. Air Canada has since updated its bereavement fare policies. The chatbot, we are told, has also been updated. It declined to comment.