How DeepSeek Could Impact the Film Industry

OpenAI's Deep Research, Blackmagic's RGBW Sensor

We’ve got a new podcast episode out with co-host Addy, diving into DeepSeek, Blackmagic, US Copyright, and Netflix’s Go-with-the-Flow model.

I’ve been digging into ChatGPT’s new Deep Research tool (also covered in today’s issue). It’s been pretty mind-blowing and honestly the most valuable feature I’ve seen in the Pro plan (more valuable than Sora).

Three Ways DeepSeek's AI Advances Could Impact Film Production

In the latest episode of the podcast, we explored several significant developments in the tech and media landscape, including the emergence of DeepSeek, Blackmagic's innovative RGBW sensor, and the implications of the U.S. Copyright Office's stance on AI-generated works.

DeepSeek’s technological advances could reshape how we think about AI in the film industry. Here are the key developments that matter for production professionals:

More Affordable AI Training Could Drive Custom Tools

DeepSeek reportedly trained its new AI model for approximately $5 million, a fraction of what companies typically spend on large language models (though that figure is disputed). This cost-effective approach opens new possibilities for studios developing their own AI tools.

Lionsgate has already started this trend, partnering with Runway to train AI models exclusively on their IP. Other major studios have begun similar initiatives to maintain control over their content. The reduced training costs DeepSeek demonstrates could accelerate this shift toward custom, studio-specific AI tools.

This also opens up more possibilities for what could be trained and run locally on NVIDIA’s upcoming DIGITS personal AI computer.

Transparent AI Decision-Making Could Improve AI Outputs

DeepSeek's interface shows users how the AI thinks through problems in real time, a feature that could transform how we interact with generative AI tools. For video generation, this transparency could help creators:

  • Understand why an AI made specific creative choices

  • Identify where the generation process went off track

  • Adjust prompts more effectively based on the AI's reasoning

  • Create more precise outputs by fine-tuning the generation process

This approach to transparency could particularly benefit video generation tools, where understanding the AI's decision-making process helps creators achieve their desired results more efficiently.
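
If you want to experiment with this, the reasoning trace isn't just visible in the chat interface; DeepSeek also returns it programmatically. Below is a minimal Python sketch that pulls both the chain of thought and the final answer from DeepSeek's OpenAI-compatible API. The reasoning_content field name follows DeepSeek's published docs at the time of writing and may change; the API key and prompt are placeholders.

# Minimal sketch: reading the model's visible reasoning over DeepSeek's
# OpenAI-compatible API. Field names follow DeepSeek's docs at the time
# of writing; key and prompt below are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",       # placeholder
    base_url="https://api.deepseek.com",   # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user",
               "content": "Suggest a three-shot sequence for a rain-soaked chase scene."}],
)

message = response.choices[0].message
print("--- How the model reasoned ---")
print(getattr(message, "reasoning_content", "(no reasoning trace returned)"))
print("--- Final answer ---")
print(message.content)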

New Split-Encoder Approach Shows Promise for Image Generation

DeepSeek's Janus Pro model takes a unique approach to image generation by separating the visual encoding process. Unlike other models that use the same encoder for both understanding and generating images, Janus Pro uses separate encoders for each task.

This separation could offer:

  • More precise control over image generation

  • Better performance in specific tasks

  • Different optimization possibilities for understanding versus creating content

While current output quality matches existing tools, this architectural change points to new possibilities for how we might handle image and video generation in production pipelines.
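
To make that split concrete, here is a toy Python sketch, emphatically not DeepSeek's actual code, of how a decoupled layout differs from a shared encoder: one pathway tuned for understanding, another for generation, both feeding a single shared backbone. Every module name and dimension below is an illustrative stand-in.

# Conceptual sketch only (not DeepSeek's code): a Janus-style layout with
# separate visual pathways feeding one shared backbone. All modules and
# sizes are toy placeholders.
import torch
import torch.nn as nn

embed_dim = 64  # toy embedding size

class DecoupledEncoderModel(nn.Module):
    """One pathway for understanding, another for generation, shared backbone."""
    def __init__(self):
        super().__init__()
        # Understanding path: semantic features (stand-in for a vision encoder)
        self.understanding_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))
        # Generation path: maps backbone states to image-token logits
        # (stand-in for a VQ-style tokenizer head)
        self.generation_head = nn.Linear(embed_dim, 1024)
        # Shared backbone (stand-in for the language model)
        self.backbone = nn.Linear(embed_dim, embed_dim)

    def understand(self, image: torch.Tensor) -> torch.Tensor:
        # image -> semantic features -> backbone: optimized for analysis
        return self.backbone(self.understanding_encoder(image))

    def generate(self, prompt_embedding: torch.Tensor) -> torch.Tensor:
        # prompt -> backbone -> image-token logits: optimized for synthesis
        return self.generation_head(self.backbone(prompt_embedding))

model = DecoupledEncoderModel()
features = model.understand(torch.rand(1, 3, 32, 32))    # understanding pathway
token_logits = model.generate(torch.rand(1, embed_dim))  # generation pathway
print(features.shape, token_logits.shape)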

What This Means for Production Professionals

These advances signal a shift toward more accessible, transparent, and customizable AI tools for film production. While DeepSeek's current offerings might not immediately replace existing workflows, they demonstrate how quickly the technology continues to evolve.

Production professionals should watch for these features to appear in their preferred tools, particularly the ability to see how AI makes decisions. This transparency could prove especially valuable when generating specific shots or effects, helping creators achieve their vision more efficiently.

SPONSOR MESSAGE

Start learning AI in 2025

Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.

It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

The Really Clever Engineering Behind Blackmagic’s RGBW Sensor

Last week we covered how Blackmagic has reduced the barrier to entry for their URSA Cine 12K camera, now offering a body-only option at $6,995. This strategic pricing, combined with innovative sensor technology, signals their push to get this camera into the hands of professional filmmakers.

But there’s also a really significant development in the camera: its new RGBW sensor.

The URSA Cine 12K introduces a fundamentally different approach to sensor design. Unlike traditional Bayer pattern sensors with red, green, and blue photosites, Blackmagic adds a white photosite to create an RGBW array. This addition captures more light information and enables a more efficient approach to resolution scaling.

Practical Benefits for Filmmakers

The new sensor design solves several common challenges when shooting at different resolutions:

Traditional cameras typically handle lower resolutions through:

  • Pixel skipping: Using only some sensor pixels, leading to aliasing and moiré

  • Pixel cropping: Using only the center portion of the sensor, changing your field of view

  • Digital downsampling: Processing-intensive and can degrade image quality

Instead, the URSA's RGBW sensor creates "cells" of pixels that work together.

When shooting at lower resolutions like 8K or 4K, these cells combine to create single pixels, providing several advantages:

  • Maintains full sensor width, preserving your field of view

  • Uses all available photosites, maximizing image quality

  • Avoids aliasing and moiré issues

  • Processes more efficiently than traditional downsampling
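
Here’s a toy Python example of why binning full cells beats skipping photosites. It is not Blackmagic’s algorithm (the URSA Cine 12K’s exact RGBW cell layout and weighting aren’t public); it simply contrasts pixel skipping with averaging 2x2 cells on a synthetic single-channel sensor.

# Illustrative sketch: pixel skipping vs. cell binning on a toy sensor.
# Not Blackmagic's implementation; just the general downscaling idea.
import numpy as np

rng = np.random.default_rng(0)
sensor = rng.random((12, 12))  # toy 12x12 grid of photosite values

# Pixel skipping: keep every other photosite. Fast, but most of the light
# is thrown away and unfiltered detail causes aliasing and moire.
skipped = sensor[::2, ::2]                               # 6x6 output

# Cell binning: average each 2x2 cell into one output pixel. Every photosite
# contributes, the full sensor width is preserved, and noise averages down.
binned = sensor.reshape(6, 2, 6, 2).mean(axis=(1, 3))    # 6x6 output

print("skipped uses", skipped.size, "of", sensor.size, "photosites")
print("binned  uses", sensor.size, "of", sensor.size, "photosites")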

Team 2 Films has an excellent video demonstrating this in the Video Village section below.

OpenAI Launches Deep Research: AI Agent for Intensive Research

OpenAI has introduced a new "Deep Research" mode for ChatGPT Pro subscribers, designed to conduct extensive web-based research and compile professional-quality reports across various domains. This AI agent, powered by the upcoming o3 model, aims to save users significant time and effort in complex research tasks.

Behind the Scenes

  • Available to ChatGPT Pro subscribers ($200/month) in the U.S., with plans to expand to other tiers

  • Utilizes OpenAI's O Series of reasoning models, specifically the full o3 model

  • Can analyze vast amounts of information, integrating text, PDFs, and images

  • Achieves a new high of 26.6% accuracy on the "Humanity's Last Exam" AI benchmark

  • Generates outputs resembling comprehensive, fully cited research papers

  • Applications span finance, science, policy, engineering, and consumer research

Final Take

The launch of Deep Research represents a significant step in AI's ability to conduct complex, multi-step research tasks. While still in its early stages, this technology has the potential to transform how professionals and organizations approach in-depth research and analysis across various fields.

Team 2 Films does a deep dive into the URSA Cine 12K and its RGBW sensor.

📀 The Beatles' "Now and Then," which utilized AI-assisted audio restoration, won the Grammy for Best Rock Performance, marking a milestone for technology-enhanced music production.

🎯 Adobe and Dentsu partner to create a comprehensive content supply chain solution for personalized marketing at scale.

🧐 The U.S. Copyright Office's recent report on AI-assisted works leaves uncertainty about the level of human creativity required for copyright protection in film and TV productions.

🏬 Disney's shuttered Star Wars: Galactic Starcruiser hotel is being repurposed as office space for Walt Disney Imagineering to support upcoming Walt Disney World expansion projects.

🏫 Florida State University's College of Motion Picture Arts acquires Governor's Square theater complex, providing students with essential facilities for virtual production, production design, and film screenings.

👔 Virtual Production Gigs

Technical Program Manager
Mo-Sys Engineering Ltd
London, UK

Internship - Virtual Production
Orbital Studios
Los Angeles, CA

📆 Upcoming Events

February 16 to 20
HPA Tech Retreat 2025
Rancho Mirage, CA

March 7 to 8
Cine Gear NY 2025
New York, NY

March 7 to 15
2025 SXSW Conference & Festivals
Austin, TX

April 6 to 9
NAB Show Las Vegas
Las Vegas, NV

April 9 to 10 🆕
Virtual Productions Gathering 2025
Breda, Netherlands

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
