
Welcome to VP Land!
In this edition:
Aronofsky's "On This Day... 1776" launches on TIME with SAG-AFTRA actors and DeepMind visuals
Google’s Project Genie lets you explore AI-generated 3D environments in real time
Martini brings cinematography controls to AI video production
What is a LoRA? Addy explains in plain terms
Kling 3.0 announced with native 4K and audio generation

Aronofsky's AI Revolutionary War Series Launches on TIME

Director Darren Aronofsky's AI venture Primordial Soup launched On This Day... 1776, a short-form animated series using Google DeepMind's AI for visuals while employing SAG-AFTRA union actors for all voice performances. The project represents a high-profile test case for artist-led AI filmmaking that maintains traditional labor standards.
AI generates the visuals; humans handle the creative roles. Google DeepMind's technology creates the imagery, but SAG-AFTRA actors perform all dialogue, a traditional writers' room led by Lucas Sussman develops narratives, composer Jordan Dykstra scores the series, and a full post-production crew handles editing, sound mixing, and color grading.
TIME Studios distributes on YouTube. Episodes drop weekly throughout 2026, each timed to the 250th anniversary of the Revolutionary War event it depicts. The first two episodes premiered January 29: "The Flag" dramatizes the Grand Union Flag raising on Prospect Hill, while "Common Sense" follows Benjamin Franklin encouraging Thomas Paine to write his famous pamphlet.
This continues the DeepMind-filmmaker partnership. Aronofsky's studio previously produced Ancestra, which we covered in detail when it premiered at Tribeca 2025. Google DeepMind most recently premiered an AI short at Sundance 2026.
Read more for industry reaction to the visual quality and what this hybrid approach means for filmmakers evaluating AI tools.
SPONSOR MESSAGE
Join 400,000+ executives and professionals who trust The AI Report for daily, practical AI updates.
Built for business—not engineers—this newsletter delivers expert prompts, real-world use cases, and decision-ready insights.
No hype. No jargon. Just results.

Google Opens Project Genie: Explorable AI Worlds

Google DeepMind has opened Project Genie to AI Ultra subscribers in the US. Unlike AI video tools that generate footage you watch, this is world generation: you create interactive 3D environments from text prompts or images, then explore them in real time as the world generates around you.
Create any environment from a prompt or reference image. Describe your world, define your character, and choose your exploration mode (walking, flying, driving, or anything else). Nano Banana Pro lets you preview and tweak the starting image before you jump in.
Navigate in real time as the AI generates the world around you. Move through environments at 20-24 fps at 720p resolution; the world reacts to your movements and actions as you explore.
Persistence and consistency across your session. Turn around and return to an area you visited 30 seconds ago, and the model remembers what was there. Paint on a wall and it stays when you look away and come back.
Remix existing worlds or start from the gallery. Branch other creators' worlds by modifying their prompts, explore curated examples for inspiration, and download videos of your explorations to share.
Three systems power it under the hood. Genie 3 is DeepMind's autoregressive world model (see the sketch after this list); Nano Banana Pro handles image generation for previews; Gemini interprets your prompts.
Current limitations are real. Sessions cap at 60 seconds, physics can break (characters walk through walls), control lag is noticeable, and promptable events, like changing the weather mid-session, aren't supported yet.
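For readers wondering what "autoregressive world model" means in practice: each new frame is predicted from all the frames so far plus your latest control input, which is why persistence, control lag, and session caps all show up together. Here's a toy Python loop sketching the general shape of that process; this is our own illustration of the concept, since Genie 3's actual architecture and interfaces aren't public:

```python
import random

# Toy autoregressive world-model loop. Our own illustration of the
# general idea only -- not DeepMind's architecture.

class ToyWorldModel:
    def predict(self, history: list[str], action: str) -> str:
        # A real world model predicts the next frame's pixels from all
        # prior frames plus the user's control input. Conditioning on
        # the full history is what buys persistence (the painted wall
        # is still there when you turn back), and it's also why long
        # sessions get expensive.
        return f"frame {len(history)}: world after '{action}'"

def read_user_input() -> str:
    return random.choice(["walk forward", "turn left", "paint wall"])

model = ToyWorldModel()
frames = ["frame 0: starting image"]   # e.g. the Nano Banana Pro preview
for _ in range(5):                     # capped, like Genie's 60-second sessions
    action = read_user_input()
    frames.append(model.predict(frames, action))

print("\n".join(frames))
```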
Read more for DeepMind's AGI reasoning behind world models and the competitive landscape with World Labs, Runway, and AMI Labs.

Martini: A Cinematographer's Take on AI Video Tools

Y Combinator-backed Martini has launched a collaborative workspace designed to give filmmakers professional camera controls over AI-generated video. Founded by cinematographer Koh Terai, the platform positions itself as "Figma, but for generative film," targeting production teams and agencies rather than individual creators generating one-off clips.
Step into generated scenes to compose shots. Virtual camera positioning lets you place yourself in the environment and frame shots using cinematography principles like focal length, camera height, and movement, rather than describing what you want in a text prompt.
Lens selection and movement controls replace prompting. Choose your lens, set your camera moves, reframe and reshoot within existing images or video footage without re-generating from scratch.
Model-agnostic with transparent per-second pricing. Access multiple AI video generators in one interface, including Veo, Kling, Sora, and Minimax; choose based on quality needs and budget.
Built for team workflows. Real-time collaboration for sharing prompts, edits, and feedback; built-in timeline for rough assemblies; XML export to move projects into Premiere, Resolve, or other professional editing software.
Read more for the full pricing breakdown across all supported models and use cases from beta testers.

How to Generate Transparent Backgrounds in Ideogram

Ideogram now supports native transparent background generation directly via prompt, eliminating the need for post-generation background removal.
Add "transparent background" or "PNG with transparency" to your prompt. The model will generate the image with no background from the start.
Works best with isolated subjects. Product shots, logos, characters, and graphic elements generate cleanly; complex scenes with multiple subjects may have issues.
Output format is PNG with alpha channel preserved. Ready to drop into your compositing or design software.
Use cases include compositing elements and product photography. Also useful for graphic design assets and overlay elements for video.
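If you're generating assets through the Ideogram API rather than the web app, the same prompt trick should carry over. Below is a minimal Python sketch; the endpoint URL, header name, and response shape are assumptions modeled on a typical REST image API, not verified Ideogram specifics, so check the official API docs before relying on them.

```python
import os

import requests

# Assumed endpoint and payload shape -- verify against Ideogram's API
# docs; only the prompt technique itself comes from this tip.
API_URL = "https://api.ideogram.ai/generate"   # hypothetical route
API_KEY = os.environ["IDEOGRAM_API_KEY"]       # hypothetical auth scheme

payload = {
    # The transparency cue lives in the prompt itself:
    "prompt": "flat vector rocket ship logo, transparent background",
}

resp = requests.post(API_URL, json=payload, headers={"Api-Key": API_KEY})
resp.raise_for_status()

# Assuming the response carries a URL to the generated image:
image_url = resp.json()["data"][0]["url"]
png = requests.get(image_url)
png.raise_for_status()

# PNG with the alpha channel preserved, ready for compositing.
with open("rocket_logo.png", "wb") as f:
    f.write(png.content)
```

A quick sanity check: open the saved file in Pillow and confirm img.mode reports "RGBA". If it comes back "RGB", the generation ignored the transparency cue and you're looking at a baked-in background.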

Digital Cloud Labs walks through an AI face texture replacement workflow using Resolve's Fusion page. The technique uses Fusion's face mesh to track facial geometry, then composites AI-generated textures onto the tracked face for aging, de-aging, creature effects, or skin correction without prosthetics.

Stories, projects, and links that caught our attention from around the web:
📚 Anthropic's "Project Panama" allegedly scanned and destroyed millions of books to train Claude, according to the Washington Post
🎨 AI creative platform FLORA raised $42M Series A to build a unified canvas connecting multiple generative AI models into production workflows
🤖 The NYT compares AI "actress" Tilly Norwood to Aki Ross from Final Fantasy: The Spirits Within, noting history may be repeating itself
🎬 Kling AI announced version 3.0 with native 4K resolution, audio generation, and multi-shot storytelling capabilities

Addy breaks down what LoRAs are in AI image generation using a Mexican restaurant metaphor. Learn how they modify AI models without full retraining, when to use them for consistent characters and style control, and how they compare to reference images.
Read the show notes or watch the full episode.
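If the restaurant metaphor leaves you hungry for the mechanics: a LoRA freezes the model's original weights and trains a tiny low-rank correction on top, so a layer's effective weight becomes W + BA, where A and B are far smaller than W. Here's a minimal PyTorch sketch of that idea, our own illustration rather than the episode's code or any particular library's implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update.

    Effective weight: W + (alpha / r) * B @ A, with A shaped (r, in)
    and B shaped (out, r). Only A and B train, so a rank-8 adapter on
    a 4096x4096 layer is ~65K parameters instead of ~16.8M.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # the base model stays untouched

        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))  # zero init: starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65536 trainable values vs. ~16.8M in the frozen base
```

Because the base weights never change, adapters can be swapped in and out per generation, which is what makes LoRAs practical for consistent characters and style control without retraining the whole model.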
Watch/Listen & Subscribe

👔 Open Job Posts

📆 Upcoming Events
🆕 February 9
GenAI Happy Hour - Pasadena
Pasadena, CA
🆕 February 11
ARRI ALEXA 35 Bootcamp - AbelCine LA (Feb)
Burbank, CA
🆕 February 20
Production Summit LA 2026
Los Angeles, CA
🆕 February 22
AI International Film Festival - February (Hollywood)
Los Angeles, CA
🆕 March 6
40th Annual ASC Awards
Los Angeles, CA
View the full event calendar and submit your own events here.


Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.