Michaela Ternasky-Holland knows that AI isn't making filmmaking easier—it's demanding an entirely different creative approach. The MTH Studio founder has discovered that creating consistent, episodic AI animation requires blending classical animation principles with generative flexibility, and the biggest challenge isn't the technology itself, but finding voice actors willing to work on AI projects.
Speaking with us at AI on the Lot for our Inside the AI Studio series, Ternasky-Holland shared how she's pioneering AI-generated episodic content through her current project with DreamFlare. Her approach treats AI animation like "composing classical music"—requiring the same precision and planning as traditional animation, but with newfound flexibility in post-production.
Viability Testing: The New Pre-Production Process
Rather than traditional pre-production, Ternasky-Holland has developed what she calls "viability testing"—a systematic approach to proving AI-generated elements can work consistently across episodes.
As much as AI is a very fluid process, you still need pipelines for that fluidity to move through. So instead of thinking about something like pre-production, I'm using the idea of viability testing.
Her process breaks down into distinct phases:
Viability testing phase: Testing character designs, environments, and animation styles for repeatability
Viability execution: Creating short episodes (30 seconds to one minute) using proven elements
Traditional post-production: Bringing in specialists for sound mixing, score composition, and SFX
The key insight? Unlike traditional animation, where reanimating a locked scene can cost hundreds of thousands of dollars, AI workflows let creators regenerate entire sequences without massive financial consequences.
The Missing Script Supervisor: Managing Consistency Across Episodes
Creating episodic content with AI presents unique continuity challenges that don't exist in traditional animation workflows.
The biggest thing is consistency. If someone were to ask me, what role do you wish you had in the project? I would say I wish I had a script supervisor. Having to do that work on top of directing has actually been almost like a secondary role that I've taken on.
Ternasky-Holland manages consistency issues like:
Character design variations between shots and episodes
Environmental continuity across different angles of the same scene
Maintaining dramatic consistency in high-fantasy content
While emerging tools like World Labs are developing generative 3D spaces from 2D images, she notes that the cleanup they still require keeps them out of production pipelines for now.
Casting Challenges: Finding AI-Willing Voice Talent
One of the most unexpected obstacles in AI animation production has been voice actor recruitment. Ternasky-Holland has faced repeated rejection from casting platforms and individual actors alike.
I've had my castings being taken down by Backstage. I've had my castings not being responded to on Voice123. Like, I have done my best to find actors, and sometimes I'm like, wow, I literally have to cast the Gen AI because I don't have talent that's willing to do this work.
Her approach involves:
Project-specific buyouts: Exclusive rights for voice synthesis within the specific project only
Traditional recording sessions: Full one-hour voice sessions at standard rates ($200-300)
Transparent contracts: Clear communication about potential voice synthesis usage
ElevenLabs backup: Using AI voices for minor characters when human actors aren't available
The resistance stems from actors' discomfort with voice synthesis terms, even when usage is limited to a single project rather than broader ownership of their voice.
Animation vs. Live Action: Why AI Suits Traditional Animation Thinking
Ternasky-Holland argues that AI generation aligns more closely with animation principles than live-action filmmaking approaches.
Animation is like composing classical music, right? Animation is like we are perfecting everything to the point where then we execute the animation. And it's a very different kind of way of playing than I think live action, more like jazz.
This classical composition approach requires:
Stringent consistency in backgrounds, characters, animation styles, and color
Human oversight for believability factors that machines don't inherently understand
Pre-planned precision rather than improvised on-set creativity
Detailed storyboarding and shot lists that account for generative variations
The advantage over traditional animation? The flexibility to regenerate and reanimate entire scenes without the six-figure reanimation costs of a conventional pipeline.
The Final Frame: Computing Power as the Next Innovation Barrier
While much attention focuses on live-action AI applications, Ternasky-Holland sees animation as an underrepresented but crucial part of the AI filmmaking conversation. In her view, the underlying generative technology isn't actually improving; creators are simply getting better at controlling it for consistency.
What's standing between us and kind of a next level of innovation is literally the computing power. The generative technology's not getting better. We're just, we just think it's getting better because the percentage of control or the percentage of consistency has gotten better.
If AI animation tools really are plateauing in capability, the real innovation opportunity lies not in training better models, but in accessing the computing infrastructure needed to make current tools viable for professional production workflows.