
Welcome to VP Land! Major studios and platforms are making big AI bets this week, from Disney's billion-dollar OpenAI partnership to Meta tapping ElevenLabs for voice tech.

In today's edition:

  • Disney goes all in on AI with Sora

  • McDonald's AI Christmas fail

  • Beeble ships local GPU relighting app

  • sync.'s new react-1 model edits character performances

Disney and OpenAI strike $1B Sora deal

Disney and OpenAI reached a three-year licensing and partnership agreement that makes Disney the first major content licensing partner on Sora, OpenAI's short-form generative video platform. As part of the deal, Disney is investing $1 billion in OpenAI equity and receiving warrants for additional shares.

  • 200+ licensed characters - Sora will generate short, user-prompted social videos using more than 200 animated, masked, and creature characters from Disney, Marvel, Pixar, and Star Wars, including costumes, props, vehicles, and iconic environments. ChatGPT Images will also generate images from the same IP library. The agreement explicitly excludes any talent likenesses or voices.

  • Disney+ integration - Curated selections of Sora-generated fan videos will stream on Disney+, and OpenAI will collaborate with Disney to build new Disney+ subscriber experiences using OpenAI's models. Sora and ChatGPT Images are expected to start generating Disney character content in early 2026.

  • Enterprise deployment - Disney will become a major OpenAI customer, using OpenAI's APIs to build new products and tools for Disney+ and deploying ChatGPT internally for employees. The financial structure includes a $1 billion equity investment plus warrants, subject to definitive agreements and corporate approvals.

  • Safety and rights framework - Both companies committed to maintaining controls to prevent illegal or harmful content generation, respect content owner rights in model outputs, and protect individual voice and likeness rights. OpenAI pledged to implement age-appropriate policies and safety controls across the service.

  • Included characters - The licensed library spans Mickey Mouse, Minnie Mouse, Lilo, Stitch, Ariel, Belle, Cinderella, Simba, Mufasa, plus characters from Encanto, Frozen, Inside Out, Moana, Monsters Inc., Toy Story, Up, Zootopia, and animated versions of Marvel and Star Wars characters including Black Panther, Captain America, Deadpool, Groot, Iron Man, Darth Vader, Luke Skywalker, Leia, the Mandalorian, and Yoda.

This is one of the first studio-sanctioned integrations of blockbuster IP into a consumer-facing generative video platform. The deal establishes a parallel track of licensed, legally cleared AI content creation, in contrast to the current landscape, where most AI video tools operate without official studio partnerships.

Disney's positioning as both a major customer and equity investor suggests deeper technical integration: AI-assisted tooling for Disney+ experiences, internal production workflows, and marketing pipelines built on OpenAI APIs.

SPONSOR MESSAGE

Eddie AI: Edit a Rough Cut in 15 Minutes

Eddie AI is a professional-grade, AI-powered video editing assistant built for creators, editors, and teams who want to streamline their workflow without sacrificing quality.

It automates tedious tasks like cutting interviews, scripting content, logging A-roll and B-roll, organizing assets, handling multicam podcast edits, and generating social clips. Users can also query transcripts with prompts to quickly find answers within their footage.

New features this week:

  • Agentic story development - Feed Eddie a URL in Rough Cut mode, and it automatically pulls key messages, brand positioning, and background context into the edit—so early cuts are aligned with client narratives from the start. (Treatment input coming soon!)

  • Extended rough cuts - Rough cut limit now stretches to 40 minutes, giving YouTubers, documentarians, and webinar creators room to edit long-form projects in one pass without external workarounds.

  • Smarter B-roll logging - Eddie now analyzes both visuals and dialogue in B-roll, making background footage searchable by what’s seen and said—surfacing contextual moments that often get missed.

Ready to see how much faster your edits can be? Try Eddie AI today and experience a smoother, smarter workflow that keeps you focused on the story, not the slog.

McDonald's Netherlands Removes AI-Generated Holiday Humbug Spot

McDonald's Netherlands pulled a 45-second AI-generated Christmas spot from its YouTube channel three days after launch, releasing a statement that described the campaign as "an important learning" as the company explores "the effective use of AI."

The spot was created by Dutch agency TBWA\Neboko and US production company The Sweetshop, released December 6, and removed December 9. Sweetshop CEO Melanie Bridge told Futurism the team spent "seven weeks" generating "thousands of takes" and editing them together, defending it as "a film, not an AI trick."

AI relighting goes local with Beeble Studio

Beeble launched Beeble Studio, a new desktop app that runs its SwitchLight 3.0 AI relighting model directly on your GPU, bringing 4K video-to-PBR relighting, unlimited local rendering, and multi-channel 16-bit EXR outputs to Windows workstations.

  • On-prem processing - The app runs entirely on local GPUs, keeping footage in-house for studios managing sensitive projects. No uploads, no cloud bottlenecks, no credit limits.

  • 4K relighting with full PBR outputs - Renders sequences up to 1 hour (100,000 frames) at 4K resolution. Exports multi-channel 16-bit EXRs with complete PBR AOV passes: normals, base color, roughness, metallic, specular, alpha, and depth in a single file. Imports directly into Nuke, Blender, and Unreal Engine.

  • SwitchLight 3.0 model - The new version processes multiple frames simultaneously for flicker-free, temporally consistent results. Trained on a dataset ten times larger than before, with sharper facial definition, better surface textures, and improved background stability.

  • Built-in tools - Includes a render queue for batch jobs, deflicker controls, and the Beeble Editor, a 3D editor for real-time relighting with physically accurate HDRI and point lights.

  • Pricing - Indie plan runs $500/year or $60/month. Standard plan is $3,000/year or $400/month for full commercial use.

sync. Launches react-1 for Performance Editing

sync. launched react-1, a ten-billion-parameter masked video diffusion model that lets editors change an actor's on-screen performance—emotion, facial expressions, head movement, and timing—without reshoots, using new audio and guided emotional direction.

  • Beyond lip sync - react-1 "learns from your uploaded audio and reanimates the entire face," editing facial expressions, head movements, and timing across the performance, not just lip sync. You upload existing footage, provide new audio, guide the emotional direction, and the model generates a new performance while maintaining the actor's identity and style.

  • Emotion-level control - The interface offers selectable emotional reads like surprised, angry, disgusted, sad, happy, and neutral, letting editors "explore different reads with the click of a button" and change the emotional beat of a performance in post.

  • Localization angle - sync. positions this for dubbing workflows, claiming react-1 "doesn't just localize the lines, it localizes the entire performance," targeting global content pipelines where emotional performance needs to match new language or script changes.

  • Post-production integration - Built on sync.'s existing lip sync API infrastructure, react-1 works "on any video content in the wild—across movies, podcasts, games, and even animations" and is available via API and self-serve web product with a "Start for free" onboarding flow.

Watch how immersive previs uses CG tools to align teams and guide confident creative decisions for The Weeknd’s “Open Hearts” Apple Immersive Video.

Stories, projects, and links that caught our attention from around the web:

🌌 Pearson taps Vū’s immersive LED tech to prototype "experiential education" with real-time simulations.

📚 VES releases Fourth Edition of its industry-standard handbook on December 18 with new AI, NeRFs, and virtual production chapters from 95 experts

🎨 Maxon One adds GPU fluid sim and real-time Redshift displacement for motion graphics and Unreal pipelines

📱 Mavis Camera adds Film Kit with custom LUT workflows and Open Gate capture for $9.99 in-app purchase

🎙️ ElevenLabs partners with Meta to power AI audio across Instagram, Horizon, and Meta AI—dubbing Reels in 70+ languages and generating character voices

🖼️ Leaked OpenAI image models appear on LM Arena under codenames Chestnut and Hazelnut, showing sharper rendering and better text-in-image handling than current DALL·E 3

Addy and Joey break down Disney’s $1B investment in OpenAI to bring Disney characters to Sora, the McDonald’s Netherlands AI ad controversy, OpenAI’s new image models, and sync’s react-1 tool.

Read the show notes or watch the full episode.

Watch/Listen & Subscribe

📆 Upcoming Events

April 18-22, 2026
NAB Show
Las Vegas, NV

July 15-18, 2026
AWE USA 2026
Long Beach, CA

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
