Welcome to VP Land! The legal landscape for generative AI just saw a significant development, as two major entertainment giants, Disney and Universal, have jointly sued AI image generator Midjourney. They allege the company's AI was trained using their copyrighted characters without authorization, leading to widespread infringement.
This landmark case marks a critical escalation in Hollywood's stance against AI platforms. The outcome could profoundly shape how AI models can legally acquire and utilize training data for creative outputs across the entire entertainment industry.
In today's edition:
Disney and Universal sue Midjourney over copyright infringement
Runway introduces conversational Chat Mode for Gen-4 AI
Zhejiang University develops FreeTimeGS for 4D video Gaussian splats
New AI updates push digital character performance forward
Runway has introduced Chat Mode, a conversational interface that lets creators generate images and videos using natural language prompts through their Gen-4 AI technology.
Conversational interface replaces traditional prompt engineering with natural dialogue, allowing creators to describe their vision and refine outputs through back-and-forth conversation with the AI system.
Gen-4 technology powers the feature with improved consistency in artistic styles, logical scene progression, and realistic lighting conditions compared to previous versions that sometimes produced visual artifacts.
The platform maintains its traditional streamlined interface alongside Chat Mode, giving users flexibility to choose between conversational creation and conventional prompt-based workflows.
Additional features like lip sync capabilities allow creators to synchronize audio with generated video content, enhancing the realism and professional quality of AI-generated materials.
Introducing Chat Mode. A new way to create with Gen-4 Images, Videos and References. Now you can generate anything you want, all from within a single conversational interface. Available for all users.
— Runway (@runwayml)
5:23 PM • Jun 12, 2025
SPONSOR MESSAGE
The Blackmagic PYXIS 12K delivers flagship cinema quality at an unprecedented $5,495 price point, featuring the same revolutionary 12K RGBW sensor found in high-end URSA Cine cameras.
This compact powerhouse shoots full-frame 12K footage with 16 stops of dynamic range—matching ARRI and RED performance levels while maintaining the modular design that made the PYXIS 6K so popular with professionals.
Key advantages that set PYXIS 12K apart:
No-crop flexibility: Shoot 4K, 8K, or 12K using the full sensor area without changing field of view—eliminating the typical resolution compromises
High-speed performance: Record up to 112fps in 8K and 60fps in 12K, with faster sensor readout that reduces rolling shutter artifacts
Professional connectivity: Dual CFExpress slots, 10Gb Ethernet, and USB-C recording enable modern cloud workflows and high-speed transfers
Anamorphic ready: True 6:5 anamorphic shooting without cropping, plus multiple aspect ratio options for maximum creative control
Post-production friendly: Blackmagic RAW codec makes massive 12K files manageable while preserving full color and exposure control
The PYXIS 12K represents a fundamental shift in what's possible under $5,500, giving you the resolution flexibility and image quality previously reserved for cameras costing five times more.
Researchers at Zhejiang University have developed FreeTimeGS, a new technology that allows Gaussian splats to play back video with smooth motion in 4D space. This breakthrough lets creators capture and replay moving 3D scenes that change over time, opening new possibilities for virtual production and immersive video experiences.
The technology uses tiny fuzzy points called Gaussian primitives that can appear and move anywhere in space and time, creating smooth animations of complex scenes without being stuck in fixed positions.
High-quality results require around 20 cameras, contrary to some online claims that fewer will suffice. This hardware requirement makes FreeTimeGS a professional-grade tool rather than a consumer solution.
Motion functions control how each Gaussian moves over time, allowing the system to handle complicated movements and special effects that older 3D reconstruction methods struggled with.
The system learns to arrange and move Gaussians by comparing results to real videos, automatically adjusting to make scenes look as realistic as possible.
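The idea described above — fuzzy points that each carry their own motion function and fade in and out over time — can be illustrated with a minimal sketch. This is our own simplified toy, not the actual FreeTimeGS implementation: the class name, the linear-motion assumption, and the Gaussian temporal-opacity window are all hypothetical choices made for clarity.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FreeTimeGaussian:
    """A toy 4D Gaussian primitive: a point that moves and fades over time."""
    center: np.ndarray    # xyz position at reference time t0
    velocity: np.ndarray  # simple linear motion function (units/sec)
    t0: float             # time of peak visibility
    sigma_t: float        # temporal extent: how long the primitive "lives"
    opacity: float        # peak opacity

    def position(self, t: float) -> np.ndarray:
        # Motion function: where this primitive sits at time t.
        # (Real systems can learn richer motion than a straight line.)
        return self.center + self.velocity * (t - self.t0)

    def temporal_opacity(self, t: float) -> float:
        # Gaussian falloff in time: the primitive appears around t0
        # and fades out, rather than existing for the whole sequence.
        return self.opacity * np.exp(-0.5 * ((t - self.t0) / self.sigma_t) ** 2)

# Two primitives that appear at different moments and drift through the scene.
scene = [
    FreeTimeGaussian(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                     t0=0.0, sigma_t=0.5, opacity=1.0),
    FreeTimeGaussian(np.array([2.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0]),
                     t0=1.0, sigma_t=0.5, opacity=0.8),
]

for t in (0.0, 1.0):
    for i, g in enumerate(scene):
        print(f"t={t}: gaussian {i} at {g.position(t)}, "
              f"alpha={g.temporal_opacity(t):.3f}")
```

Rendering a frame at time t would evaluate every primitive's position and temporal opacity like this, then splat the surviving Gaussians to the image; training adjusts all these parameters until the renders match the real multi-camera footage.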
This represents a major step forward for using Gaussian splats in professional workflows where motion matters. The ability to replay video content in 4D space could transform how creators approach virtual sets, VR experiences, and any project requiring dynamic 3D scenes.
Disney and Universal filed a major lawsuit against AI image generator Midjourney, alleging the company willfully infringed on their copyrighted characters by training its AI on unauthorized content and enabling commercial distribution of derivative works. The 110-page complaint targets Midjourney's subscription-based service and upcoming video generation platform.
The lawsuit alleges Midjourney trained its AI models on unauthorized copies of protected characters including Darth Vader, Elsa, Iron Man, Shrek, and the Minions, then allowed users to generate derivative works of these famous properties.
Disney and Universal's complaint calls Midjourney "the quintessential copyright free-rider and a bottomless pit of plagiarism," arguing the platform functions as a virtual vending machine generating endless unauthorized copies of their copyrighted works.
The studios presented visual evidence showing AI-generated images that closely resemble their iconic characters, claiming this proves Midjourney's business model fundamentally relies on unauthorized exploitation of protected content.
Beyond image generation, the lawsuit targets Midjourney's commercial distribution through its subscription service and newly announced video generation, potentially amplifying copyright concerns across moving media.
This case represents a significant escalation in Hollywood's fight against AI companies, with potential implications for how generative AI models can legally train on and reproduce copyrighted material across the entertainment industry.
The outcome could fundamentally reshape AI development practices, potentially requiring explicit licensing for training data or reinforcing fair use protections that accelerate AI adoption. Legal precedents from this landmark case will likely influence ongoing copyright battles between traditional content creators and AI technology providers.
Three new AI updates are pushing digital character performance forward, with improvements spanning avatar generation, voice synthesis, and real-time interaction. HeyGen's Avatar IV now creates talking avatars from single photos with hand gestures, while ElevenLabs and Mirage Studio have each rolled out their own enhancements.
HeyGen Avatar IV transforms any single photo into a talking avatar that synchronizes voice with facial expressions and hand gestures, eliminating the need for cameras or motion capture equipment while supporting styles from realistic humans to anime characters.
ElevenLabs released version 3 of their voice synthesis technology, which improves the emotional range and naturalness of AI-generated speech to create more convincing audio for digital characters and storytelling applications.
Mirage Studio showcased advances in real-time avatar motion and interaction technology, enabling more responsive and immersive character behaviors during live digital experiences.
These parallel developments in avatar creation, voice synthesis, and motion technology point to a broader shift toward more lifelike digital characters. The combination of easier creation tools and more expressive AI-generated performances could make professional-quality character work accessible to creators without specialized technical skills.
YouTube channel Virtual Production Insider reimagines Severance using Unreal Engine.
Stories, projects, and links that caught our attention from around the web:
🌄 Nearly half of Adobe Stock's library now consists of AI-generated images, totaling over 313 million, a significant jump from just 8.5 million in 2023.
🩸 A new deleted scene from Ryan Coogler's Sinners has been released, showcasing a musical sequence featuring Delroy Lindo in a visually striking split diopter shot.
💥 AMC Networks has joined forces with Runway to harness AI technology for more efficient marketing and TV development, focusing on pre-visualization and special effects ideation.
🎞️ Optik Oldschool has launched OptiColour 200, a new color film with natural colors, good contrast, and an orange base that makes scanning easier. It is available in both 35mm and medium format.
Addy and Joey cover WWDC highlights—Liquid Glass UI, Genmojis, and Apple Intelligence. Plus: 4D splats, AI memory in MP4s, and faster image models.
Read the show notes or watch the full episode.
Watch/Listen & Subscribe
Virtual Production Instructional Specialist
College of Motion Picture Arts - Florida State University
Tallahassee, FL
AR/VR Intern
LumeXR
Kerala, India
Virtual Production Intern
Orbital Studios
Los Angeles, California
July 8 to 11
AI for Good Film Festival 2025
Geneva, Switzerland
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.
Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.