Netflix acquired Ben Affleck's stealth AI company InterPositive, Corridor Crew released an open-source green screen keyer, and Lightricks shipped LTX-2.3 with a full desktop video editor. Three very different approaches to the same question: where does AI actually fit into filmmaking workflows?

Quick Take

The episode captures three distinct strategies for AI in production. Netflix is building internal capability to train custom models per-production. Corridor Crew solved a specific, painful problem (green screen keying) with open-source code. Lightricks is giving away a full nonlinear editor with AI generation built in. Each approach reflects a different bet on what filmmakers actually need: production-specific tools, targeted solutions to discrete problems, or integrated workflows that collapse the generate-download-edit cycle.

What We Explored: Netflix's Custom Model Strategy

Netflix announced the acquisition of InterPositive, the AI filmmaking company Ben Affleck founded in stealth in 2022. The deal brings Affleck on as a Senior Advisor and folds the entire InterPositive team into Netflix's product and technology organization.

The core idea: InterPositive doesn't build a general-purpose text-to-video model. Instead, it trains custom models specific to each production, using footage from that production's own dailies. Affleck described the workflow in Netflix's announcement: "Together with a small team of engineers, researchers and creatives, I began filming a proprietary dataset on a controlled soundstage with all the familiarities of a full production. I wanted to build a workflow that captures what happens on a set, with vocabulary that matched the language cinematographers and directors already spoke."

What the model actually does: Once trained on a production's footage, the model handles post-production tasks like wire removal, relighting, background replacement, and angle adjustments. The key constraint: it's not trying to generate novel content from text prompts. It's modifying footage that already exists, based on the visual language established by that production's actual shooting.

Why Netflix acquired it: The company has been exploring AI in production for years. Elizabeth Stone, Netflix's Chief Product and Technology Officer, said in the announcement: "The InterPositive team is joining Netflix because of our shared belief that innovation should empower storytellers, not replace them." Netflix has the data, the infrastructure, and the creative relationships to make this work at scale. The acquisition signals that Netflix is moving beyond experimenting with AI tools to building internal capability.

The strategic angle: This isn't a Netflix-branded model. It's a model-per-production approach. Each film gets its own trained version, living in its own box. That's different from the failed Lionsgate-Runway partnership, which tried to build a single "Lionsgate look" across all the studio's diverse productions. Netflix's approach is more practical: the model learns what this specific production looks like, then helps the filmmakers do more with what they've already shot.

The open question: We haven't seen a single frame of output from InterPositive. The technology is proven internally, but the real test comes when it's deployed across Netflix's production pipeline. Will it actually save time and money? Will filmmakers trust the results? Those answers matter more than the acquisition itself.

What We Tested: CorridorKey's Green Screen Solution

Corridor Crew released CorridorKey, an open-source neural network for chroma key that solves a specific, painful problem in VFX: the unmixing challenge.

The problem it solves: When you shoot against green screen, the edges of your subject inevitably blend with the green background. Traditional keyers struggle to untangle these colors, forcing VFX artists to spend hours building complex edge mattes or manually rotoscoping. Even modern "AI Roto" solutions typically output a harsh binary mask, destroying the delicate, semi-transparent pixels needed for realistic composites. Hair, motion blur, transparent objects, soft edges — all of these become nightmares.
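To see why unmixing is hard, it helps to write out the forward model. Here's a minimal NumPy sketch (illustrative numbers, not CorridorKey's code): what the camera records at an edge pixel is a blend of the unknown foreground color and the screen color, which leaves more unknowns than measurements.

```python
import numpy as np

# Forward model of a green-screen edge pixel: the sensor records a blend
# of the foreground color and the screen color, weighted by coverage.
green = np.array([0.1, 0.8, 0.1])   # screen color (roughly known)
fg    = np.array([0.6, 0.4, 0.3])   # true foreground color (unknown on set)
alpha = 0.4                          # true coverage (unknown on set)

observed = alpha * fg + (1.0 - alpha) * green
# Per pixel there are 3 observed channel values but 4 unknowns
# (foreground R, G, B plus alpha) -> the system is underdetermined,
# which is why classical keyers need hand-built edge mattes or roto.
```

That underdetermination is exactly the gap a learned model can fill: it brings a prior about what real hair, motion blur, and glass look like, rather than solving the equations blind.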

How CorridorKey works: You feed it a raw green screen frame and a coarse alpha hint (a rough black-and-white mask). The neural network completely separates the foreground object from the green screen. For every pixel, even the highly transparent ones, the model predicts the true, unmultiplied straight color of the foreground element alongside a clean linear alpha channel. It doesn't guess what's opaque and what's transparent; it actively reconstructs the color of the foreground object as if the green screen were never there.
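The payoff of predicting straight (unpremultiplied) color plus a linear alpha is that the standard "over" composite just works on any new background. A minimal NumPy sketch of that textbook operation — not CorridorKey's actual code — using made-up pixel values:

```python
import numpy as np

def composite_over(fg_straight, alpha, bg):
    """Composite a straight (unpremultiplied) foreground over a background.

    fg_straight: float array (H, W, 3), true foreground color per pixel
    alpha:       float array (H, W, 1), linear coverage in [0, 1]
    bg:          float array (H, W, 3), new background plate
    """
    return fg_straight * alpha + bg * (1.0 - alpha)

# A single semi-transparent "hair" pixel: the model predicts the hair's
# true color and 30% coverage, so compositing over a new background
# carries no green contamination into the blend.
fg  = np.array([[[0.40, 0.25, 0.10]]])  # predicted straight color
a   = np.array([[[0.3]]])               # predicted linear alpha
bg  = np.array([[[0.05, 0.05, 0.30]]])  # new blue background plate
out = composite_over(fg, a, bg)
```

Because the recovered color carries no trace of the screen, the same foreground/alpha pair composites cleanly over any plate — which is what a harsh binary mask can never give you at semi-transparent edges.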

The technical specs: CorridorKey outputs 16-bit and 32-bit Linear float EXR files — VFX-standard formats that integrate directly into Nuke, Fusion, or Resolve. Resolution is also VFX-friendly; the model handles 4K plates dynamically. The catch: it requires an NVIDIA GPU with at least 24GB of VRAM (like a 3090, 4090, or 5090).

Why this matters for production: Green screen infrastructure is everywhere. It's cheaper than building gray screens or LED volumes. CorridorKey lets productions keep using green screen while AI handles the hard part — the unmixing. Addy's point in the episode: "There's a lot of green screen infrastructure in the world. We're going to continue to shoot green screen for a long time. It costs extra to paint it gray and production is not going to pay for that. They're going to just have AI figure it out."

Free and open source: CorridorKey is released under a Creative Commons license. You can use it for commercial projects, modify it, and contribute improvements. The only restriction: you can't repackage it and sell it, and any variations must remain free and open source.

The bigger pattern: This is part of a wave of filmmakers and VFX artists building their own models to solve specific problems. Corridor Crew has the expertise, the audience, and the motivation to do this. Not every tool needs to be a consumer product or a SaaS platform. Sometimes the answer is: release the code, let the community improve it, and move on to the next problem.

What We Questioned: LTX-2.3 and the Desktop Editor

Lightricks released LTX-2.3, a major update to its open-source video generation model, and launched LTX Desktop, a full nonlinear video editor with AI generation built in.

LTX-2.3 improvements: The update focuses on detail, motion, and audio:

  • Sharper fine detail through a rebuilt latent space and updated VAE.

  • Stronger image-to-video, with less freezing and more real motion.

  • Cleaner audio from filtered training data and a new vocoder.

  • Native portrait format up to 1080×1920, trained on vertical-orientation data rather than cropped from landscape.

The model generates up to 20 seconds at 4K/50 FPS with synchronized audio.

LTX Desktop: This is the part that caught attention. It's a full open-source nonlinear editor with local AI generation built in. You can edit footage, then instantly generate clips on your timeline using text-to-video, image-to-video, audio-to-video, or video edit modes. Local generation requires a Windows NVIDIA GPU with 32GB+ VRAM; macOS runs in API mode.

Why this workflow matters: The traditional cycle is fragmented: generate a clip, download it, import it into your editor, realize you need something different, go back and regenerate. LTX Desktop collapses that. You're editing and generating in the same tool. If you need more shots, you generate them on the fly. If a generated clip doesn't work, you adjust and regenerate without leaving the editor.

The open-source bet: Lightricks is giving away professional-grade software. Addy's reaction in the episode: "Dude, that's crazy that they're just giving this away. This is like good IP." The company is betting that by releasing the model weights and the editor, they'll build a community, get feedback, and create a platform that other tools can build on. They're not trying to lock users into a proprietary interface.

The comparison: You wouldn't see OpenAI or Google release something like this. An application-level tool with local AI generation is valuable IP. Lightricks is taking a different approach: release it, let people build on it, and own the underlying model.

The limitation: LTX Desktop is in beta. The local generation requirements are steep (32GB VRAM on Windows). But the direction is clear: the future of nonlinear editors might be editors that can generate on demand, not editors that import pre-generated clips.

Bottom Line: Three Bets on Where AI Fits

These three stories represent three different answers to the same question: where does AI actually belong in filmmaking?

  • Netflix's InterPositive approach assumes the future is production-specific models. Train on your footage, get tools built for your production, keep creative decisions in human hands. It's a bet on custom capability, not generic tools.

  • Corridor Crew's CorridorKey solves a discrete, painful problem with open-source code. No SaaS, no subscription, no proprietary lock-in. Just: here's the code, here's what it does, improve it if you can. It's a bet on targeted solutions.

  • Lightricks' LTX Desktop collapses the generate-edit cycle into one tool. It's a bet that the future of editing is editing + generating in the same interface, with the model weights available for anyone to run locally.

Which approach wins may depend less on the technology itself and more on what filmmakers actually need. Netflix's approach requires trust in a new workflow. CorridorKey requires the right hardware and technical skill. LTX Desktop requires 32GB of VRAM and comfort with beta software. Each one is solving for different constraints.

