
Welcome to VP Land! The line between “AI company” and “production company” keeps blurring — and it’s coming from both directions.

In today’s edition:

  • Netflix acquires Affleck’s filmmaker-built AI startup

  • Luma compresses $15M campaigns to $20K with Agents

  • LTX-2.3 generates video with native audio

  • Corridor Crew open-sources neural green screen keying

Netflix Acquires Affleck’s AI Startup

Netflix acquired InterPositive, the AI filmmaking technology company founded by Ben Affleck in 2022. The entire InterPositive team joins Netflix, with Affleck coming on as Senior Advisor. Financial terms were not disclosed.

  • Post-production focus, not synthetic performances. InterPositive’s tools work with footage from a production’s own shoots to handle continuity fixes, lighting adjustments, and background replacements. The company does not make AI actors or generate synthetic performances.

  • Trained on real filmmaking. Affleck filmed a proprietary dataset on a controlled soundstage with full production setups. The model was “trained to understand visual logic and editorial consistency, while preserving cinematic rules under real-world production challenges such as missing shots, background replacements or incorrect lighting.”

  • Built-in creative restraints. Tools include guardrails to protect creative intent, keeping decisions in the hands of artists. Affleck described “deliberately smaller datasets and models focused on filmmaking techniques, rather than performances.”

  • Netflix leadership framing. Elizabeth Stone (CPO): “purpose-built for filmmakers and showrunners.” Bela Bajaria (CCO): tools should “expand creative freedom, not constrain it or replace the work of writers, directors, actors, and crews.”

Netflix has already used generative AI for special effects in some original content. 

SPONSOR MESSAGE

Descript is an easy-to-use, text-based video editor for podcasts and talking-head videos.

But we've been using it as part of our longer video editing process with Resolve: paper edits.

Sorting through hours of interview footage and building out a radio edit is a pain.

With Descript, I edit videos as quickly as editing a document. I import my footage, get instant transcriptions, and search through hours of content in seconds.

When I hear something I like, I highlight the text and add it to my rough-cut composition. Then I build everything out, copying and pasting soundbites like I would in a text document, and the video updates too.

I can also add temp VO, scene notes, and comments. The whole team can be on the same project, working on this in real-time.

When we're done, we export an XML from Descript and bring it into our NLE of choice.

Yes, most editing apps have added some transcription support, but IMO, none of them come close to Descript's speed and ease of use when working with text.

Luma Launches Unified AI Model

Luma launched Uni-1, a model that combines visual understanding and generation in a single decoder-only autoregressive transformer. Alongside Uni-1, the company debuted Luma Agents, an agentic platform for end-to-end creative work across text, image, video, and audio.

  • Single architecture for reasoning and rendering. Uni-1 represents text and images in one interleaved sequence, allowing the model to reason about a scene before generating it. It achieves leading performance on RISEBench and ODinW-13.

  • Multi-model orchestration. Luma Agents coordinate with Ray 3.14, Google Veo 3, Nano Banana Pro, ByteDance Seedream, and ElevenLabs.

  • Production case study. A year-long, $15 million ad campaign was localized for multiple countries in 40 hours for under $20,000, passing the brand’s internal quality controls.

  • Enterprise adoption. Publicis Groupe, Serviceplan, Adidas, Mazda, and Humain are already using the platform. Available via API at lumalabs.ai/uni-1.

We previously covered Luma’s push into professional production, including our interview with CEO Amit Jain and the Hollywood Dream Lab launch.

MacFarlane Used AI for Bill Clinton

Seth MacFarlane used AI to transform himself into Bill Clinton for Episode 5 of Ted Season 2 on FX, according to an AP report. His team tried conventional approaches first: “We tried prosthetics, we tried traditional CGI, everything else just looked terrifying.”

MacFarlane called it “an interesting example” of AI as a production tool — reached for after conventional methods failed, not as a first choice.

Open-Source Video and Keying Tools

Three open-source releases landed for video production and VFX workflows.

LTX-2.3 from Lightricks adds native audio generation — sound effects, ambient noise, and dialogue — directly into its open-source video model. The update also introduces native 9:16 vertical format and generates up to 20 seconds at 4K/50 FPS. Available on fal.ai at $0.04/second, Replicate, ComfyUI (free local), and ltx.io, all under Apache 2.0.

LTX Desktop is an open-source app that combines local video generation with a video editor, supporting text-to-video, image-to-video, audio-to-video, and video edit modes. Local generation requires a Windows NVIDIA GPU with 32GB+ VRAM; macOS runs in API mode. Currently in beta.

CorridorKey is a neural network for green screen keying by Niko Pueringer of Corridor Crew. It predicts true straight color and clean linear alpha for every pixel, including motion blur and soft edges. Outputs 16-bit and 32-bit linear float EXRs for Nuke, Fusion, and Resolve. Requires an NVIDIA GPU with 24GB+ VRAM.

At the VES Awards, we asked Martin Hill, Joseph Kosinski, Corridor Crew, Adam Savage, Jerry Bruckheimer, Richard Taylor, and more what they actually think about AI.

Stories, projects, and links that caught our attention from around the web:

🎮 Ramen released Aura 12.0 Beta, a multi-agent AI assistant for Unreal Engine featuring Dragon Agent for multi-step editor tasks and Telos 2.0, which generates Blueprints 10x faster.

🌍 A new Production Capture Network launched as a global alliance for environmental scanning, building a worldwide network for location and set capture.

🤖 Xicoia, the studio behind AI actor Tilly Norwood, hired former Amazon Prime Video exec Mark Whelan to lead the Tillyverse expansion, a digital universe where AI characters collaborate and build careers.

On the Denoised podcast, we tested Nano Banana 2 against Pro across multiple scenarios — real-world knowledge, product photography, and web search — and covered the AI agent moves from Anthropic and Perplexity.

Read the show notes or watch the full episode.

Watch/Listen & Subscribe

👔 Open Job Posts

Director of Virtual Production - Los Angeles, CA
Virtual Production Producer - Seoul, South Korea
Virtual Production Technician - Seoul, South Korea
Eyeline Studios

Unreal Engine Operator - Riyadh, Saudi Arabia
Motion Capture Systems Technician - Riyadh, Saudi Arabia
Pixomondo

AI Video Producer and Editor - Seattle, WA
Amazon (T&C Creative Services)

Sr. Design Technologist, Elevated Shopping - New York, NY / Seattle, WA
Amazon

Virtual Production Supervisor - Dubai, UAE
Garage Studio

📆 Upcoming Events

March 8
40th Annual ASC Awards
Los Angeles, CA

March 9
GDC Festival of Gaming 2026
San Francisco, CA

March 12
SXSW 2026
Austin, TX

April 18
NAB Show 2026
Las Vegas, NV

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
