
Welcome to VP Land! This week - partnerships, funding, and lots of custom model training as a service.
In our last poll, we asked which tool caught your interest most — Wan 2.2’s motion control, KREA’s Realtime model, or the AE AI plugin. KREA’s Realtime Video model emerged as the clear favorite. Check out today’s poll below.
In today's edition:
EA and Stability AI partner up
Runway lets users train custom models
Wonder Studios scores $12M for AI films
BTS of a robotic arm VFX shot

Stability AI Partners With EA on Game Dev Tools

Stability AI and Electronic Arts announced a strategic partnership to co-develop generative AI models, tools, and workflows specifically for EA's game development teams—positioning this as a technical collaboration rather than a typical vendor licensing deal.
Embedded research team - Stability AI's 3D research team will work directly alongside EA's artists and developers to build custom AI systems tailored to game production workflows, suggesting a deeper integration than off-the-shelf tool adoption
PBR material acceleration - First joint initiative focuses on accelerating Physically Based Rendering material creation through artist-driven workflows, including generating 2D textures that maintain exact color and light accuracy across different environments
3D pre-visualization - Partnership will pursue AI systems capable of pre-visualizing entire 3D environments from intentional prompts, allowing artists to creatively direct game content generation with more speed and precision
Stability AI's 3D credentials - Stable Fast 3D, TripoSR, and Stable Point Aware 3D rank among the ten most-liked image-to-3D models on Hugging Face; Stable Zero123 is the most-liked text-to-3D model
The partnership arrives as game studios face mounting pressure to control production costs and timelines while maintaining AAA quality expectations. EA positioned this around "amplifying creativity" for existing teams rather than replacing artists—though whether custom-built AI tools actually accelerate workflows at EA's scale remains to be tested in shipped titles.
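The color-accuracy claim in the PBR bullet above comes down to how PBR separates a material's base color from lighting. A minimal diffuse-shading sketch (illustrative only, not EA or Stability AI code) shows why a generated albedo texture holds its color identity under different environment lights:

```python
import numpy as np

def shade(albedo: np.ndarray, light_color: np.ndarray, n_dot_l: float) -> np.ndarray:
    """Lambertian diffuse term: pixel color = albedo * light * cos(angle)."""
    return albedo * light_color * max(n_dot_l, 0.0)

# The albedo texture stores lighting-independent base color.
albedo = np.array([0.8, 0.2, 0.1])

# Render the same texel under two different environments.
warm = shade(albedo, np.array([1.0, 0.9, 0.7]), 1.0)
cool = shade(albedo, np.array([0.6, 0.7, 1.0]), 0.5)

# Lighting scales the result, but the base color is never baked into the
# texture itself -- that separation is what lets one generated texture
# stay accurate across environments.
```

Because lighting is applied at render time, an AI system generating albedo maps only has to get the base color right once; the engine handles every environment.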
SPONSOR MESSAGE
Find out why 100K+ engineers read The Code twice a week
Falling behind on tech trends can be a career killer.
But let’s face it, no one has hours to spare every week trying to stay updated.
That’s why over 100,000 engineers at companies like Google, Meta, and Apple read The Code twice a week.
Here’s why it works:
No fluff, just signal – Learn the most important tech news delivered in just two short emails.
Supercharge your skills – Get access to top research papers and resources that give you an edge in the industry.
See the future first – Discover what’s next before it hits the mainstream, so you can lead, not follow.

Train Custom AI Models on Runway

Runway is introducing Model Fine-tuning, a new self-serve feature that will allow users to customize its generative models using their own datasets. The feature aims to solve the problem of general-purpose models failing to meet specific customer needs.
Key Details:
Custom training - Users will be able to fine-tune models for their specific use cases, aesthetics, or motion dynamics, moving beyond the "opinionated" nature of current general models.
Enterprise focus - Runway is highlighting applications for industries including robotics, life sciences, education, architecture, and design, suggesting a strong push toward specialized commercial use.
Pilot program access - The feature is available now for select pilot partners. A general release is "coming soon," and interested users can apply for early access.
Accessible post-training - Runway states its goal is to "make post-training accessible to everyone," allowing users to customize models with "minimal compute and data requirements."
This move positions Runway to support more specialized enterprise workflows, addressing what it identifies as a gap where general models "can fall short when deployed for real customer problems."
Wonder Studios Raises $12M for AI Film Production

Wonder Studios raised $12 million in seed funding to scale its three-pillar model combining commercial production, IP partnerships, and original content—all built around a community of AI-native creators. VP Land sat down with CEO Xavier Collins and product designer JD LeRoy to discuss their community-driven approach to AI filmmaking — and how they’re empowering creators to realize projects that once seemed impossible.
The London-based studio closed the round led by Atomico, with backing from Adobe Ventures, Hollywood veterans like Stephen Lambert (Studio Lambert) and Erik Huggers (former Vevo CEO), plus AI pioneers including Freepik CEO Joaquín Cuenca Abela and ElevenLabs co-founder Mati Staniszewski.
The business model - Wonder operates as an AI-native studio with three revenue streams: high-profile commercial work for brands and artists, IP partnerships with content creators, and original content production
Already shipping - Since launching in April 2025, Wonder generated significant revenue through projects like an AI-powered music video for Lewis Capaldi's Something in the Heavens (created with Google DeepMind, YouTube, and Universal Music Group) and the Beyond the Loop AI anthology series.
Community platform - The Wonder app serves as a hub connecting AI-native creators with career opportunities, collaborators, and resources; Wonder curates this community and produces content that unlocks the value of their IP

Behind AI-Generated Robotic Arm Video

Creator enigmatic_e posted a BTS breakdown showing how three AI video generation tools created the illusion of a robotic arm manipulating a jar—demonstrating how accessible open-source models and workflow tools have become for solo creators.
Behind the scenes of my robotic arm jar video 🎬✨
— enigmatic_e (@8bit_e)
4:47 PM • Oct 22, 2025
Wan 2.2 - Used open-source AI animation model for most shots with first and last frame generation technique, allowing the creator to define start and end points while the AI generates the motion between them
ComfyUI - Provided the workflow backbone, with a points editor node that masks specific areas without manual rotoscoping, isolating just the areas to generate while blending seamlessly with original footage
Veo 3.1 - Google's model handled specific shots where it delivered better results for the reference image matching, including perfect lighting consistency with the original footage
Masking workflow - Points editor in ComfyUI allowed quick area isolation for generation, getting "most of the way there" without precision manual work, though manual rotoscoping remains an option for tighter control
Full frame generation - Final sauce explosion shot dropped masking entirely so the AI effect could interact with the creator's body and background, not just the isolated jar area
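The masking steps above boil down to a standard matte composite: generate only inside the masked region, then blend the result back over the live footage. A minimal sketch (hypothetical code, not the actual ComfyUI points editor node) of that final blend:

```python
import numpy as np

def composite_masked(original: np.ndarray, generated: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend an AI-generated region back into the original frame.

    original, generated: float32 RGB frames in [0, 1], shape (H, W, 3).
    mask: float32 matte in [0, 1], shape (H, W) -- 1 where the AI output
    should show (e.g. the jar area), 0 where the live footage is kept.
    """
    alpha = mask[..., None]  # broadcast the matte over the RGB channels
    return generated * alpha + original * (1.0 - alpha)

# Toy example: keep the left half of the frame, composite the right half.
h, w = 4, 4
orig = np.zeros((h, w, 3), dtype=np.float32)   # stand-in for live footage
gen = np.ones((h, w, 3), dtype=np.float32)     # stand-in for AI output
mask = np.zeros((h, w), dtype=np.float32)
mask[:, w // 2:] = 1.0
out = composite_masked(orig, gen, mask)
```

A soft-edged matte (values between 0 and 1 at the boundary) is what produces the seamless blend the creator describes; dropping the mask entirely, as in the sauce-explosion shot, is equivalent to `mask` being all ones.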

Two Minute Papers dives deep into Google’s Veo 3, which grasps complex concepts like color mixing, physics, and realistic motion.

Stories, projects, and links that caught our attention from around the web:
⚡ Runway launched Workflows, chaining multiple AI models into custom node-based pipelines to eliminate manual exports between tools
📢 Runway launched Apps for Advertising, a collection of AI tools that generate product shots and ad mockups without traditional shoots
💡 Volinga launches Plugin Pro with mesh-based relighting to solve 3DGS dynamic lighting challenges for LED volumes
👥 ChatGPT Shared Projects expands to Free, Plus, and Pro users for team collaboration with shared chats, files, and instructions
🎬 fal adds Seedance 1.0 Pro Fast, delivering cinematic-quality AI video 3x faster and 60% cheaper than the standard Pro tier

Addy and Joey unpack Adobe’s major AI moves with its new Foundry service and Invoke acquisition, Amazon’s House of David using AI for 253 shots to cut months off production, Runway’s pivot to industrial applications, and insights from LA Tech Week on evolving AI production strategies.
Read the show notes or watch the full episode.
Watch/Listen & Subscribe

👔 Open Job Posts
Virtual Production (VP) Supervisor/Specialist - FT
Public Strategies
Oklahoma City, OK
Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events
April 18-22, 2026
NAB Show
Las Vegas, NV
July 15-18, 2026
AWE USA 2026
Long Beach, CA
View the full event calendar and submit your own events here.

We are thrilled to announce that our NEW Large Language Model will be released on 11.18.25.
— Merriam-Webster (@MerriamWebster)
1:21 PM • Sep 26, 2025
Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.



