Companies using AI chatbots to handle customer enquiries must honour the advice they give customers, a civil tribunal has ruled, after an Air Canada customer was given incorrect instructions by a chatbot that the airline tried to argue was “responsible for its own actions”.

After his grandmother died in November 2022, Vancouver resident Jake Moffatt booked a flight to Toronto and was advised by the airline’s website chatbot that he could apply within 90 days for retroactive reimbursement of the fare difference between normal and bereavement fares.

Airline policy, however, actually requires those seeking bereavement fares to apply for the lower rate before the booking is made – a point that Air Canada used in denying Moffatt’s subsequent application for a refund.

A series of emails ensued, and in early February an Air Canada representative conceded that the chatbot had provided “misleading words” but said that, even though Moffatt had correctly followed its guidance, the airline would not pay the refund.

Yet Air Canada argued that the chatbot “is a separate legal entity that is responsible for its own actions” – a “remarkable” suggestion that, the British Columbia Civil Resolution Tribunal (CRT) found in awarding Moffatt $926 (C$812) in damages and costs, had not been explained by the airline and made no logical sense.

“While a chatbot has an interactive component,” the tribunal found, “it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website.”

“It makes no difference whether the information comes from a static page or chatbot,” the ruling notes, adding that the airline “did not take reasonable care to ensure its chatbot was accurate”.

Don’t blame the AI when things go wrong

Amidst surging corporate adoption of conversational chatbots – often powered by generative AI (genAI) engines that are well known to make mistakes, fabricate information and be susceptible to manipulation – the Air Canada decision is a warning for businesses that simply adding conversational capabilities doesn’t reduce their obligation to provide accurate information to customers.

With one recent GetApp survey of more than 500 project managers finding that 74 per cent of respondents believe AI will soon play “an essential role” in making critical business decisions – and Gartner predicting that more than 80 per cent of enterprises will have deployed genAI-enabled applications by 2026 – accuracy has never been more important.

Bad information from a genAI chatbot “will directly lead to the death of a customer” by 2027, Gartner has warned, as the technology becomes an increasingly significant business risk that must be managed to ensure accuracy and consistency with business policies, and to prevent inadvertent breaches of privacy, consumer, and other laws.

Some 81 per cent of the executives in a recent Slack-YouGov survey of over 10,000 workers said they feel a sense of urgency to incorporate genAI into their organisations – yet addressing core data integrity and accuracy issues remains a critical back-end challenge, with just 59 per cent of Australian office workers saying they trust the technology: 70 per cent of men but just 43 per cent of women.

Respondents cite widespread concerns about visibility into how AI is used, transparency of underlying algorithms, industry standards, and human validation of AI’s outputs – not to mention the concerns identified in a recent Reveal survey of 585 software developers, in which 40.7 per cent said integrating AI into their software development process would be their biggest challenge in 2024.