
The Legal Risks of AI Speaking for Your Business: What African Companies Can Learn from Air Canada's Landmark Case

Bryan Miller - Legal Africa

As Africa’s digital economy expands, more businesses are turning to artificial intelligence (AI) to enhance customer engagement, automate processes, and boost efficiency. But what happens when AI speaks on behalf of your business and gets it wrong? Can your company be held legally responsible for what a chatbot says?

According to Brendan Bernicker, cofounder of Bernicker Law PLLC, a boutique law firm advising technology startups and software developers in the U.S. and U.K., these are not far-fetched questions. Speaking during the ABA-sponsored webinar titled “We Said What?! Companies’ Liability for Statements by (and to) Their AI Customer Service Agents,” Bernicker urged business owners to think carefully about the legal consequences of “allowing AI to interact with customers.”

“These are things, among others, that business owners should be thinking about or be a little worried about—because they could potentially create legal liability for the company,” Bernicker said.

A Case That Changed the Conversation

Bernicker referenced the 2024 case of Moffatt v. Air Canada, decided by the British Columbia Civil Resolution Tribunal, which found Air Canada liable for a negligent misrepresentation made by its AI chatbot.

The case involved a customer, Moffatt, whose grandmother had passed away. He asked the airline's chatbot about bereavement policies and was told he could book his flight, request a refund within 90 days, and receive a bereavement discount afterward. Relying on that information, he booked the flight, but his refund request was later denied because the company's actual policy required pre-approval.

When Moffatt sued, the Tribunal ruled that Air Canada was liable for the misinformation provided by its chatbot. The decision marked one of the first times a business was held legally responsible for statements made by an AI agent.

According to Bernicker, “Air Canada could have set the AI model up differently so that it didn’t provide information that it wasn’t prepared to honor. It could have simply linked to the policy without adding commentary.”

This case, he said, “teed up the first real decision on the question about when businesses are liable for statements by their AI agents.”


The Broader Lesson for Businesses

For Bernicker, who also teaches a course on AI law at Penn State University, this case highlights an important point: existing laws still apply, even to new technologies. “Most of what I cover in class and advise clients on,” he explained, “is how existing law applies to new technologies. This is not an area where there’s going to be lots of new rules for liability for chatbots. Traditional doctrines like apparent authority serve pretty well.”

He added that while AI tools are reshaping how businesses communicate, they also introduce new risks in how notices, copyright claims, or consumer complaints are received and processed. Often, the key legal question becomes: What did the business know, and when did they know it?

Bernicker’s advice is straightforward: businesses should limit the apparent authority of AI tools by using prominent, clear, and tailored disclaimers, and by keeping detailed records of all customer interactions. “Having effective disclaimers and maintaining records of customer interactions are the number one and number two best ways to limit liability,” he emphasized.

Why Africa Must Pay Attention

For Africa, where AI is fast becoming a vital tool in sectors such as fintech, telecom, transport, and e-commerce, these lessons are not just theoretical; they are timely warnings. Across the continent, governments are still crafting national AI policies, and few have established clear frameworks for AI liability or consumer protection in digital spaces.

Yet the implications are real. If an AI-powered banking bot in Nigeria misleads a customer, or an e-commerce chatbot in Kenya provides inaccurate information about refunds or deliveries, could those companies face the same legal consequences as Air Canada?

As Bernicker noted, “AI is inherently international.” The technologies that African startups deploy are often built abroad, and their operations transcend borders. Ignoring how global courts interpret AI liability could leave African businesses and consumers exposed to risks that are already being tested elsewhere.

The Way Forward

Africa’s future with AI must be guided not only by innovation but also by responsible governance. Legal systems across the continent must start adapting existing consumer protection, contract, and data privacy laws to address the realities of AI-driven commerce.

For businesses, the message is clear: AI can enhance operations, but it cannot replace human oversight or legal accountability. Disclaimers, transparency, and ethical AI design must become standard practice.

Join the Conversation

How ready is Africa’s legal system for the age of AI-driven customer service?
Share your thoughts with Legal Africa across our platforms or join our next policy dialogue on AI and the Law in Africa.
