The CEO’s Guide to AI Governance: Why We Can’t Wait for Ottawa

I spent this past week in Ottawa at Prime Time, the annual gathering of Canada’s media production industry. The optimism in the halls was palpable, but inside the session rooms, we tackled some uncomfortable truths.

Apart from Mark Carney’s positive energy with the Heated Rivalry team and his improvised speech, the session that has stayed with me most was "Ethical AI: What It Means in Practice."

The room was filled with producers, funders, and creators, all asking variations of the same questions: When will the rules be clear? Can AI be used ethically? How will government, unions, and guilds intervene? 

The panel featured entertainment lawyer Stephen Stohn, Tejas Shah (Gennie), and Marie-Julie Desrochers (CDCE) and was moderated by Kelly Wilhelm. They delivered a wake-up call that every CEO needs to hear:

The cavalry isn't coming. We are the cavalry.

Stephen Stohn put it best, quoting Mark Carney: "We must face the world as it is, not as we wish it to be."

The reality is that Canadian regulation is moving too slowly to keep up with the speed of generative AI. The AI tools that we call on every day (ChatGPT, Gemini, Claude, Perplexity, and others) were built by ingesting vast amounts of information from across the internet. That data is already collected, processed, and embedded in these systems. That reality can’t be undone. Waiting for the perfect AI Act before establishing our own internal guardrails ignores this reality and exposes both individuals and organizations to unnecessary risk.

At Innovate By Day, we believe that ethics isn't a software feature you buy. It is a workflow you enforce. If you are a studio head or marketing leader trying to navigate this "Wild West," here is a 5-step governance checklist, based on the hard truths from the session:

1. The "Copyright Void" Audit

The Reality: If an AI generates your script, character design, game mechanics, or soundtrack, your legal claim to that work may be far weaker than expected… or may not exist at all. As Stephen Stohn warned, there is currently a "copyright void" for machine-generated content.

The CEO Question: “Can we prove a human created the core value of this asset?”

If you are building a brand or a franchise on AI-generated foundations, you are building a castle on sand. You need a human in the loop, not just for quality, but for ownership.

2. The "ART" Standard

The Reality: Marie-Julie Desrochers proposed a framework that should be the gold standard for our industry: ART.

  • Authorization (Did the model have consent to use the training data?)
  • Remuneration (Were the original creators paid?)
  • Transparency (Is the use of AI disclosed?)

The CEO Question: “Do our vendors comply with ART?”
Don’t just ask if a tool is "efficient." Ask if it’s compliant. If a vendor can’t answer these three questions, they are passing their liability on to you.

3. Indemnification > Efficiency

The Reality: Tejas Shah from Gennie made a brilliant point about the difference between "Consumer AI" and "Enterprise AI." Consumer tools are fun; Enterprise tools are insured.

The CEO Question: “Who pays if we get sued?”

We are moving toward a world where the most valuable feature of an AI tool isn't its creative output, but its legal indemnification. If your team is using open-source tools that scrape data without protection, you are the insurance policy.

4. The "In the Style Of" Ban

The Reality: Prompting an AI to create an image "in the style of [living artist]" or "in the style of [competitor]" is an ethical minefield and a potential legal ticking time bomb.

The CEO Question: “Have we explicitly banned mimetic prompting?”

Your internal policy should be clear: We use AI to automate drudgery, not to mimic the specific creative expression of other humans.

5. The Truth Test

The Reality: There is a growing temptation to use AI to “improve” historical records, whether that means smoothing a grainy 1970s recording, clarifying a blurred image, or enhancing an imperfect transcript. But these alterations don’t just clean up the past; they subtly change it. When factual accounts are reinterpreted or altered, even with good intentions, the integrity of the historical record is undermined. Over time, these modified versions risk being treated as primary sources. Future AI systems may then train on the inaccuracies, compounding the errors, accelerating the spread of misinformation, and further eroding our ability to verify what is real and what is not.

The CEO Question: “Are we prioritizing polish over truth?”

Especially for those of us in documentary and non-fiction, our duty to the truth outweighs our desire for a slicker edit.

The Bottom Line

We have an opportunity here. We can either be the victims of disruption or the Architects of Trust.

I believe ethical AI isn’t about perfection or prohibition. It’s about accountability, transparency, and intentional human oversight at every stage.

By setting the "ceiling" for ethics now, rather than waiting for the government to set the "floor", we protect our IP, we respect the creative community, and we build businesses that are sustainable for the long haul.

