Google DeepMind and Darren Aronofsky's Primordial Soup have premiered their first AI-hybrid film at the 2025 Tribeca Film Festival.
ANCESTRA, directed by Eliza McNitt, combines live-action footage with sequences generated by Veo, Google DeepMind's advanced video generation model. The deeply personal film explores themes of birth, ancestry, and cosmic connection, using AI to visualize concepts that would be challenging or impossible to capture with traditional filmmaking techniques.
The collaboration represents the first of three planned short films designed to test the boundaries of AI-driven filmmaking while maintaining creative control in the hands of human artists.
Behind the Lens: AI becomes a creative partner rather than a replacement tool
The production workflow for ANCESTRA demonstrates how AI can augment rather than replace traditional filmmaking techniques. McNitt worked with a multidisciplinary team of over 200 experts, including VFX artists, animators, and AI researchers, to create a hybrid production pipeline.
The process began with Gemini analyzing personal archival photos from McNitt's birth, taken by her father, and turning them into detailed written descriptions. These descriptions became prompts for generating new imagery that felt authentic to her personal story. Imagen produced key concept art defining the overall visual style and mood, Veo animated those images and generated final video sequences from text prompts, and traditional VFX techniques were used to composite and refine the final shots.
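The exact production tooling is not public, but the staged structure can be illustrated with a minimal Python sketch. Here, describe_photo, generate_concept_art, and animate_image are placeholder stand-ins for the Gemini, Imagen, and Veo steps, not real DeepMind APIs:

```python
from dataclasses import dataclass
from pathlib import Path

# Placeholder stages -- stand-ins for Gemini, Imagen, and Veo calls, not real APIs.
def describe_photo(photo: Path) -> str:
    return f"detailed aesthetic description derived from {photo.name}"

def generate_concept_art(description: str) -> Path:
    return Path("concept_art.png")  # would be an Imagen output in practice

def animate_image(concept_art: Path, description: str) -> Path:
    return Path("sequence.mp4")     # would be a Veo image-to-video output

@dataclass
class Shot:
    source_photo: Path
    description: str = ""
    concept_art: Path | None = None
    video: Path | None = None

def run_pipeline(photos: list[Path]) -> list[Shot]:
    """Mirror the four-stage flow: analyze -> concept art -> animate -> composite."""
    shots = []
    for photo in photos:
        shot = Shot(source_photo=photo)
        shot.description = describe_photo(shot.source_photo)             # Gemini stage
        shot.concept_art = generate_concept_art(shot.description)        # Imagen stage
        shot.video = animate_image(shot.concept_art, shot.description)   # Veo stage
        shots.append(shot)  # final compositing happens in conventional VFX tools
    return shots
```

In practice, each placeholder would be replaced by a call to the corresponding model, and the compositing stage would happen in conventional VFX software rather than in a script.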
Motion Control: New capabilities enable precise camera movement and subject tracking
The production pushed Veo's capabilities in several key areas that directly address filmmaking challenges:
Personalized video generation allowed the team to create intimate, story-specific imagery. For scenes requiring a realistic baby in utero, they fine-tuned an Imagen model to match reference images, then used Veo's image-to-video capability to create animated sequences.
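The fine-tuned Imagen model used on the film is not publicly available, but the image-to-video step can be sketched against the public Gemini API via the google-genai Python SDK. The model ID, config fields, and polling pattern below are assumptions drawn from public documentation, not the production pipeline:

```python
# Sketch of image-to-video: animate a still reference into a short clip.
# Assumes the public google-genai SDK and a Veo model exposed through it;
# model ID and config fields are assumptions and may differ by SDK version.
import time
from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

with open("baby_reference.png", "rb") as f:  # hypothetical still from a fine-tuned image model
    still = types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",
    prompt="Slow, gentle drift toward the sleeping figure, soft amniotic light",
    image=still,
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation is a long-running operation; poll until it completes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

clip = operation.response.generated_videos[0]
client.files.download(file=clip.video)
clip.video.save("baby_sequence.mp4")
```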
Motion matching enabled complex camera movements that would be difficult to achieve with traditional methods. In one sequence showing a journey through the human body, the team created a virtual 3D model, recorded the desired camera path, then used Veo to track and replicate that exact motion in the generated footage.
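How the recorded camera path is fed to Veo has not been documented, but the first half of the idea, capturing a previsualized camera move as per-frame data, can be sketched with Blender's bpy API. The JSON layout and its downstream use as motion guidance are assumptions:

```python
# Run inside Blender: export the active camera's per-frame transform so it can
# be reused as motion guidance downstream. The JSON layout and how a generator
# would consume it are assumptions, not a documented Veo interface.
import json
import bpy

scene = bpy.context.scene
cam = scene.camera
path = []

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)                 # evaluate the animation at this frame
    matrix = cam.matrix_world.copy()
    loc = matrix.to_translation()
    rot = matrix.to_euler("XYZ")
    path.append({
        "frame": frame,
        "location": [loc.x, loc.y, loc.z],
        "rotation_euler": [rot.x, rot.y, rot.z],
        "focal_length_mm": cam.data.lens,
    })

with open("/tmp/camera_path.json", "w") as f:
    json.dump({"fps": scene.render.fps, "keys": path}, f, indent=2)
```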
Live-action integration solved practical challenges around filming with newborns. Using Veo's "add object" capability, the team composited AI-generated babies into live-action footage while maintaining visual consistency.
Workflow Integration: AI slots into existing post-production pipelines
Rather than replacing traditional techniques, ANCESTRA demonstrates how AI can enhance existing workflows. Complex shots combined multiple AI-generated elements with conventional VFX compositing.
For example, a scene capturing the point of view from inside a cracking crocodile egg required the following, combined roughly as in the sketch after this list:
Multiple Veo-generated video elements for organic textures and movement
Imagen-created background imagery for the sunset environment
Traditional VFX compositing to seamlessly blend all elements together
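A rough illustration of that final blend is a simple "over" composite, here with OpenCV and NumPy on placeholder frames. Real shots would be composited in dedicated VFX software rather than a script like this, and all file names below are hypothetical:

```python
# Minimal sketch of layering the elements above with simple alpha compositing.
# File names are placeholders; all frames are assumed to share one resolution.
import cv2
import numpy as np

def alpha_over(fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Standard 'over' operation: 4-channel foreground (color + alpha) onto a
    3-channel background, both in OpenCV's BGR channel order."""
    alpha = fg[..., 3:4].astype(np.float32) / 255.0
    fg_color = fg[..., :3].astype(np.float32)
    bg_color = bg.astype(np.float32)
    return (fg_color * alpha + bg_color * (1.0 - alpha)).astype(np.uint8)

# Placeholder inputs: an Imagen-style backdrop plus Veo-style foreground
# elements rendered with alpha channels.
background = cv2.imread("sunset_backdrop.png")                       # BGR
membrane   = cv2.imread("membrane_texture.png", cv2.IMREAD_UNCHANGED)  # BGRA
eggshell   = cv2.imread("eggshell_crack.png", cv2.IMREAD_UNCHANGED)    # BGRA

frame = alpha_over(membrane, background)
frame = alpha_over(eggshell, frame)
cv2.imwrite("composite_frame.png", frame)
```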
This approach maintains the creative control filmmakers expect while expanding their visual possibilities.
Technical Specs: The production tools behind the experiment
The ANCESTRA production utilized several interconnected AI tools:
Veo serves as the primary video generation engine, capable of creating high-fidelity moving images from text prompts, reference images, or existing footage. The model supports style transfer, motion guidance, and compositional continuity.
Gemini handled prompt development, analyzing reference materials and generating detailed aesthetic descriptions that became the foundation for visual creation.
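That prompt-development step can be sketched with the public google-genai SDK: feed Gemini a reference photograph and ask for a detailed aesthetic description to seed later image and video prompts. The model ID and instruction wording are assumptions, not the production setup:

```python
# Sketch of turning an archival photo into a detailed aesthetic description
# that can seed image/video prompts. Assumes the public google-genai SDK and
# a GOOGLE_API_KEY in the environment; the model ID is an assumption.
from google import genai
from PIL import Image

client = genai.Client()
photo = Image.open("archival_photo.jpg")  # hypothetical reference image

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        photo,
        "Describe this photograph's lighting, color palette, film grain, framing, "
        "and emotional tone in detail, as a prompt for an image generation model.",
    ],
)
print(response.text)
```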
Flow, Google's AI filmmaking tool built around Veo, contributed shot planning and scene-assembly capabilities, though these played a smaller role in ANCESTRA's production.
The tools integrate with standard post-production workflows through APIs, allowing seamless handoffs between AI generation and traditional editing systems.
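One hypothetical illustration of such a handoff is dropping each rendered clip into an editorial watch folder with a JSON sidecar recording its provenance; the folder layout and metadata fields below are assumptions, not a documented integration:

```python
# Hypothetical handoff step: copy a rendered clip into an editorial watch folder
# and write a JSON sidecar recording its provenance, so downstream tools and
# artists can trace each AI-generated element. Paths and fields are assumptions.
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

EDITORIAL_INBOX = Path("/mnt/editorial/ancestra/incoming")  # hypothetical watch folder

def hand_off(clip: Path, shot_name: str, prompt: str, model: str) -> Path:
    EDITORIAL_INBOX.mkdir(parents=True, exist_ok=True)
    dest = EDITORIAL_INBOX / f"{shot_name}{clip.suffix}"
    shutil.copy2(clip, dest)                    # deliver the clip itself
    sidecar = dest.with_suffix(".json")         # provenance travels with it
    sidecar.write_text(json.dumps({
        "shot": shot_name,
        "prompt": prompt,
        "model": model,
        "delivered_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2))
    return dest

# Example:
# hand_off(Path("baby_sequence.mp4"), "SC012_0040", "Slow drift ...", "veo-2.0-generate-001")
```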
Industry Context: A measured approach to AI adoption in filmmaking
The ANCESTRA experiment arrives as the film industry grapples with questions about AI's role in creative processes. Unlike some AI implementations that aim to automate entire workflows, this project emphasizes collaboration between human artists and AI systems.
Darren Aronofsky positioned the partnership as the next evolution in filmmaking's relationship with technology: "Filmmaking and technology have always gone hand in hand. This is the next great leap."
The project addresses practical production challenges while maintaining artistic integrity. McNitt described her experience: "AI allowed us to visualize the impossible—while holding onto the soul of the story."
The Final Cut: AI augmentation points toward new creative possibilities
ANCESTRA represents a significant data point in the ongoing conversation about AI's role in filmmaking. Rather than replacing human creativity, the project demonstrates how AI can expand what's possible within existing production frameworks.
The success of this hybrid approach could influence broader industry adoption, particularly for independent filmmakers seeking to achieve high-end visual effects within limited budgets. However, the project's emphasis on collaboration and creative control suggests that human artistry remains central to the process.
As Google DeepMind and Primordial Soup continue their partnership with two additional films, the industry will be watching to see whether this model of AI-human collaboration becomes a template for future production workflows.