At Toronto International Film Festival's Spotlight: AI on the Roof panel, Adobe and Moonvalley executives made their case for commercially safe AI models, while creators grappled with a fundamental tension: how to harness AI's creative potential without risking legal exposure or alienating talent.

The panel—featuring Hannah Elsakr (VP of Global GenAI at Adobe), Ed Ulbrich (Moonvalley's head of strategic growth), filmmaker Ryan Patterson, and Transitional Forms founder Pietro Gagliano—revealed an industry at a crossroads between experimental tools and production-ready workflows. The discussion was moderated by Todd Terrazas, executive director of AI LA and co-founder of AI on the Lot.

The Commercial Viability Problem

Ulbrich, a 35-year visual effects veteran who helped build Digital Domain, drew a clear line between experimental AI filmmaking and work that can actually sell.

"I had a fork in the road where all these video models are surging into the marketplace," Ulbrich said. "There's incredible technology, amazing stuff, but it is from the internet. It is largely basically stolen pixels. If you want to use these big video models, I'm relegated to making TikToks. There's no distribution. I can't sell that at market. I can't take this and sell it to Netflix or Amazon."

His move to Moonvalley stemmed from this dilemma.

"When Moonvalley popped up on my radar, it's like someone's gonna make a clean model. That to me is the largest unlock of value in our world right now, because suddenly we can now use this power of technology without violating copyright."

Ed Ulbrich, Head of Strategic Growth

Adobe's Elsakr echoed this position, explaining that commercially safe models were a "choice point" for Adobe's leadership when deciding how to approach generative AI. "We work with every creator in the universe—that's the community we've built our business around. We also work with 85% of the Fortune 2000 and they care about not getting sued by anyone."

The Prompt Limitation

Ulbrich challenged the idea that prompting alone could serve as a professional filmmaking tool.

"Prompting to make a movie to me is the same as making a movie with a ball," he said. "It's not a precise enough tool in professional filmmaking when it's a team sport and you're getting notes from David Fincher. These are notes that come with laser precision, and the expectation is that you'll address those notes with laser precision."

His solution: AI-powered versions of traditional VFX tools—roto brushes, compositing software, camera controls—that provide the precision professionals need. "We're not trying to do this to Hollywood. We're doing it with them and for them. We need to make tools that people use, that have biomechanical feedback like rigging and a stylus, modeling a camera."

Both Moonvalley and Adobe are building custom model services, allowing studios and directors to train proprietary models on their own IP. Elsakr described directors approaching Adobe to train Firefly on their specific shot styles and camera work. "Lots of directors have come to us and said, you have a commercially safe model—would you train that model on my IP, my shot style, the way I push in? I have all this data."

The Speed Unlock

Patterson, whose short film Dreamer premiered at the event, used Adobe Firefly to create a semi-autobiographical film in less than two weeks. "Everything from sound effects to editing in Premiere—it came together in weeks to do this."

For Patterson, the value isn't just speed—it's iteration. "Being able to quickly iterate on a new idea, whether that's an AI trailer or a short—I think being able to just get ideas out to people very quickly, that's certainly new."

Elsakr positioned this acceleration as critical for keeping up with audience demand. "People are up to 12 hours of video consumption in different forms a day. Every franchise needs shoulder content for social, whether it's TikTok, Instagram, or YouTube. Everyone needs to be out there because we're all demanding fresh content."

She pointed to Netflix's work on Korean content: "The team on K-pop Demon Hunters—nobody saw that coming. But that team is like, we are up 24/7 pushing out content because there's a concert coming."

Guardrails and Control

Both Moonvalley and Adobe emphasized built-in restrictions. Ulbrich's model has never seen celebrity likenesses, violence, or adult content—guardrails that can be selectively disabled for enterprise clients with proper licensing.

"We get complaints online from our consumer people: 'I can't make Spider-Man fight Batman.' No, you can't. That's right," Ulbrich said. "Our model's never seen Spider-Man or Batman or any celebrity likeness."

Elsakr stressed that talent agreements need updating.

"Just because studios have archives doesn't mean they have rights to train on likeness. There's a studio agreement now. Look at your T's and C's—make sure that if you want to participate, great. But then they pay."

Hannah Elsakr, VP of Global GenAI at Adobe

The Remix Culture Tension

Gagliano pushed back against the panel's emphasis on IP protection, arguing the industry is fighting yesterday's battles.

"I'm worried that we're looking at IP from a legacy point of view, from the time when films and TV and music came out of localized distribution. All that model is broken, if not breaking," he said. "By the time we figure out protectivity of the original creator or the studio, we're gonna be living in a world where remix culture is going crazy. The audience will be the authors themselves—they'll be owners of the content."

Elsakr partially agreed, noting that Adobe had worked with Paramount at Comic-Con to create controlled fan participation: "People wanna personalize whatever they wanna personalize. You can also create guardrails—you don't have to let people prompt whatever they want. You can keep it with negative prompting in the realm of what you feel is on brand for your franchise."

New Vocabulary Needed

Ulbrich argued the industry needs new definitions as AI blurs traditional roles. "We're going to need a new vocabulary, glossary of terms. What is animation? That's a whole conundrum."

He predicted mainstream Hollywood adoption within 12 months: "We're going to start to see major household name studios and visual effects and animation companies using generative AI at scale on Hollywood IP. It's not gonna start out by generating the entire movie—that'll take a few more years. We think it's gonna start in visual effects and animation."

The shift from secrecy to openness will mark the change, he said. "Some people ask, what's your job like? I'm like, it's kind of like a drug dealer. No one's admitting they're using it in Hollywood, but behind the scenes—can I get some clean stuff? Everybody wants some. What we're gonna see in the next 12 months is that stigma's gonna start to dissipate as clean models start to become pervasive."

Gagliano predicted the industry will finally name the emerging medium combining generative AI, interactivity, and real-time rendering. "If anyone knows the name of it, please tell me, because it would shorten my pitch a lot."

Elsakr offered three predictions: the "AI lot of the future" (a reimagined Hollywood production ecosystem), "fix it in pre" (using AI's power in pre-production rather than post), and "gatekeepers be gone" (democratized access to production tools).

Embracing the Unexpected

Patterson described AI's unpredictability as a creative feature rather than a bug. "Sometimes you ask them to go right and they go left. I've chosen to sort of lean into that unpredictability. Let's explore what's over there instead. Sometimes it leads to really interesting places. I find leaning into the weirdness has been fun."

Ulbrich shared an accidental discovery that became a Moonvalley feature: stick figure annotations automatically generating full video storyboards. "Someone was drawing quick sketches, scribbling some stick figures, and they thought, 'I wonder if...' This is a man, this is a mountain, that's a kayak. And that made it into the render. It read the annotation written on the stick figures and just filled in the picture."

Gagliano described creating a "weirdness policy" because language models had become too predictable. "The models are so predictable that if you're like, 'oh, it's super abstract and weird,' rainbow unicorn. They're almost too predictable already. We had to create a specific policy for weirdness to be like, don't say rainbow unicorn—you're punished."

The panel took place at TIFF's 50th edition as part of AI on the Lot's programming, sponsored by Adobe and Moonvalley.
