
C.D. Howe Warns Against a Canadian “Sora-Style” AI Innovation

Economist Anindya Sen, the memo’s author, warns that without proper guardrails, AI innovation could erode public trust and damage Canada’s reputation.
Sora/OpenAI - Photo: Getty Images

A recent memo from the C.D. Howe Institute argues that Canada should avoid the pitfalls exemplified by OpenAI’s Sora.

What Went Wrong with Sora

Sora, a text-to-video AI from OpenAI, became controversial because it can generate hyper-realistic videos of real people. Critics say it enables deepfakes, misinformation, and the misuse of people’s likenesses — especially those of deceased people and public figures.

OpenAI initially offered only an opt-out, rather than opt-in, approach to using people’s likenesses — a weak form of consent. Under pressure, the company added stricter guardrails and limits, especially for celebrities and other sensitive figures.

Why Canada Should Be Careful

Sen argues that Sora shows how innovation without regulation can backfire. Even if a tool has good uses — for creators, small organizations, or community groups — it still poses serious risks.

A better approach would require clear legal rules, not just voluntary safeguards.

In particular, Sen recommends that AI systems should only use a person’s likeness with explicit consent. That principle, he says, would reduce the potential for reputational harm and mass misuse.

The Role of AI Regulation in Canada

Sen notes that Canada’s AI Strategy Task Force is working on a national plan — but it doesn’t yet emphasize safety standards or a dedicated AI law.

The memo warns against under-regulation, arguing that hands-off approaches risk creating harmful AI systems.

Canada’s proposed Artificial Intelligence and Data Act, tabled in 2022, did aim to tackle risk, but Sen believes it has gaps — especially in how it defines “harm” and assigns accountability.

He argues that stronger, more specific legislation could prevent Sora-style risks while still allowing innovation.

A Vision for Responsible Innovation

To guide AI development, Sen calls for a framework that encourages both beneficial uses and responsibility. He urges the federal government to clearly define the types of AI innovation it supports and the kinds of harm it wants to avoid.

Without such a framework, Sen warns, Canada risks creating a “Canadian Sora” — a tool that undermines trust and damages its global AI reputation.
