
Welcome to VP Land! AI is moving from the experiment phase to the production pipeline—with studio case studies, new VFX apps, and desktop supercomputers shipping this week.
In our last poll, most of you agreed that ETC’s uprez workflow for gen AI final pixel options still needs improvement. Check out today’s poll below.
In today's edition:
House of David ups AI use
Runway’s new apps
NVIDIA ships personal AI supercomputers

Veo 3.1 Brings More Control to AI Video

Google rolled out significant updates to its Flow AI video tool, introducing Veo 3.1 with enhanced audio capabilities and new editing features that give creators more granular control over their generated content.
The standout additions include:
More control - "Ingredients to Video" and First/Last frame are now added input options
Direct video editing in Flow - New "Insert" feature lets you add objects or characters to existing scenes, with Flow handling complex lighting and shadows to make additions look natural. Object removal is coming soon
Improved video quality - Veo 3.1 delivers better prompt adherence, enhanced realism, and more accurate textures compared to the previous version
Extended video length - The "Extend" feature can now create videos lasting a minute or more by seamlessly connecting to previous clips
Google reports over 275 million videos have been generated in Flow since its launch five months ago. The Veo 3.1 model is also available through Gemini API for developers and Vertex AI for enterprise customers.
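For developers curious what that API access looks like in practice, video generation through the Gemini API is asynchronous: you submit a prompt, then poll a long-running operation until the clip is ready. Here's a minimal sketch using the google-genai Python SDK; the model ID below is an assumption, so check Google's docs for the current Veo 3.1 identifier.

```python
import time

from google import genai

# Minimal sketch: generate a clip with Veo via the Gemini API.
# Assumes GEMINI_API_KEY is set in the environment.
client = genai.Client()

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed model ID; verify in the docs
    prompt="A slow dolly shot down a rain-soaked neon alley at night",
)

# Video generation is a long-running operation; poll until it's done.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
```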
Early examples are showing promise - PJ Ace on X demonstrated using Veo 3.1 with Nano Banana to create what they called "million-dollar-looking ads" for brands, though the full workflow and quality benchmarks remain to be tested by the broader creative community.
SPONSOR MESSAGE
Descript is an easy-to-use, text-based video editor for podcasts and talking-head videos.
But we've been using it as part of our longer video editing process with Resolve: paper edits.
Sorting through hours of interview footage and building out a radio edit is a pain.
With Descript, I edit videos as quickly as editing a document. I import my footage, get instant transcriptions, and search through hours of content in seconds.
When I hear something I like, I highlight the text and add it to my rough-cut composition. Then I build everything out, copying and pasting soundbites like I would in a text document - but the video updates too.
I can also add temp VO, scene notes, and comments. The whole team can be on the same project, working on this in real-time.
When we're done, we export an XML from Descript and bring it into our NLE of choice.
Yes, most editing apps have added some transcription support, but IMO, none of them come close to Descript's speed and ease of working with text.

House of David's New Season Used AI in 253 Shots

House of David Season 2 used 253 AI-generated shots—more than triple the 73 shots in Season 1—according to production details revealed at AWS's Culver Cup showcase last week. Director Jon Erwin and Amazon MGM Studios' Chris del Conte demonstrated how the production developed what they call a "hybrid workflow" that seamlessly blends AI content with live-action photography and traditional VFX.
The key breakthrough was planning for AI integration from the start rather than using it as a backup solution:
Style transfers applied the show's visual aesthetic directly onto AI-generated assets, matching them to live-action footage closely enough that individual techniques became nearly impossible to distinguish
Virtual production environments were built by creating structural foundations in Unreal Engine within a week, then using AI for photorealistic details—compressing the typical 10-12 week timeline and $15,000-$200,000 cost for LED wall backgrounds
Multiple tools were stacked - Midjourney, Runway, Kling, Magnific, and Topaz alongside traditional VFX tools like After Effects and Unreal Engine - since no single AI tool produces broadcast-ready results alone
The editorial workflow generated 20 times more AI content than needed, letting editors select the best shots, much like handling traditional footage
Runway Launches Task-Specific Apps

Runway has introduced Apps, a new approach to their AI tools, breaking down generative capabilities into specific, workflow-focused applications, according to their announcement. Instead of working with general-purpose AI models, users can now access task-specific tools designed for particular creative challenges.
Today we’re releasing another new batch of Runway Apps with a focus on VFX.
The first is the Change Weather app.
Now you can upload a video and simply describe what conditions you’d like to see.
Make a sunny day look overcast or bring torrential downpours.
(1/5)
— Runway (@runwayml)
7:18 PM • Oct 15, 2025
The Apps live inside what Runway calls "Generative Sessions" and cover both video and image generation tasks:
Video applications handle specific post-production needs like object removal, 4K upscaling, dialogue addition, and environmental changes (weather, lighting, time of day)
Image tools focus on product photography and style transformation workflows
Simple interface design - users select an app based on their specific task rather than figuring out how to prompt a general model
Best practices included - Runway provides guidance on input quality and prompting approaches for each application
The company plans to expand the Apps collection in the coming weeks, though no specifics were provided about which capabilities are next.
What matters: This represents a shift from the "learn our AI model" approach to "pick your creative task." For professionals juggling multiple projects, having purpose-built tools for common post-production needs could streamline workflows significantly - assuming the apps deliver consistent results at the quality level these tasks actually require.
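Runway frames Apps as a UI-level product, and the announcement doesn't say which of these tasks are reachable programmatically. Purely as a hypothetical illustration of scripting a comparable generation task, here's a sketch using Runway's existing developer SDK; the model name, image URL, and output handling are assumptions.

```python
import time

from runwayml import RunwayML  # pip install runwayml

# Hypothetical sketch: kick off a video task via Runway's developer API.
# Reads RUNWAYML_API_SECRET from the environment.
client = RunwayML()

job = client.image_to_video.create(
    model="gen3a_turbo",                           # illustrative model name
    prompt_image="https://example.com/frame.jpg",  # placeholder source frame
    prompt_text="Overcast sky, light rain, wet pavement reflections",
)

# Tasks run asynchronously; poll until the render succeeds or fails.
task = client.tasks.retrieve(job.id)
while task.status not in ("SUCCEEDED", "FAILED"):
    time.sleep(5)
    task = client.tasks.retrieve(job.id)

print(task.status, getattr(task, "output", None))  # output: asset URLs on success
```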

NVIDIA Ships DGX Spark Desktop AI Supercomputer

NVIDIA has officially begun shipping DGX Spark, its desktop AI supercomputer, to researchers, developers, universities, and creators, according to the company's announcement on October 16.
Key developments:
Desktop form factor: Unlike traditional AI workstations requiring server rack infrastructure, DGX Spark is designed for desktop use, making high-powered AI workflows accessible to small studios and independent professionals
ComfyUI optimization: The popular open-source generative workflow toolkit announced full DGX Spark support with zero-setup installation, real-time preview features, and support for advanced node chaining—capabilities typically reserved for cloud-based systems
Local generation workflows: ComfyUI's integration enables near-instant local generative image and video workflows, reducing reliance on cloud render farms and transfer delays. All major Stable Diffusion pipelines and most custom checkpoints are supported out of the box (see the sketch after this list)
Privacy-preserving production: For teams working with sensitive storyboard, casting, or preproduction materials, Spark's desktop approach allows complete in-house generation and archiving
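To make "local generation workflows" concrete: ComfyUI exposes a small HTTP API on the machine it runs on, so a queued render never leaves the desk. Here's a minimal sketch, assuming a default ComfyUI install on port 8188 and a workflow graph exported from the UI with "Save (API Format)"; the file name is a placeholder.

```python
import json
import urllib.request

# Minimal sketch: queue a render on a ComfyUI server running locally.
# "workflow_api.json" is a placeholder; export your own graph from the
# ComfyUI interface using "Save (API Format)".
with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The server returns a prompt_id you can use to look up outputs in /history.
print(result["prompt_id"])
```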
Worth noting: This local-first approach contrasts sharply with the cloud-only AI tools dominating the market. For creative teams prioritizing speed, creative sovereignty, and custom pipeline integration, having this level of AI power available locally removes persistent workflow bottlenecks—especially for experiment-driven production work.

Corridor Crew recreated The Matrix’s iconic bullet-time shot using modern VFX workflows, including ComfyUI.

Stories, projects, and links that caught our attention from around the web:
📝 Netflix's Eyeline released the Vchain paper, introducing a new method for scalable video generation using chain-of-frame reasoning.
🚀 YouTube has become "home turf" for BBC Studios, according to the company, fueling rapid audience engagement and new creative strategies.
🤝 Nem Perez officially joins forces with Promise AI to launch The Generation Company, a new division focusing on VFX work.
🎵 Universal, Sony, Warner, Merlin, and Believe partner with Spotify to shape an “artist-first” approach to AI music production and distribution.
🏝️ The Canary Islands launch a homegrown VFX sector by combining cutting-edge technology, international talent, and aggressive tax incentives.

Addy and Joey dive into Infinity Fest’s panel on Gaussian Splats and AI in virtual production, exploring faster 3D workflows, Topaz's 16-bit EXR upscaling, and AI tools like Nano Banana and Seedream.
Read the show notes or watch the full episode.
Watch/Listen & Subscribe

👔 Open Job Posts
Virtual Production (VP) Supervisor/Specialist - FT
Public Strategies
Oklahoma City, OK
Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events
October 24
Generative Media Conference
San Francisco, CA
April 18-22, 2026
NAB Show
Las Vegas, NV
July 15-18, 2026
AWE USA 2026
Long Beach, CA
View the full event calendar and submit your own events here.


Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.