
Welcome to VP Land! AI updates all summer. Just before we finalized our rundown, OpenAI announced two new updates: ChatGPT Agents (we're still figuring out how it differs from Operator) and image control improvements to GPT Image (it looks like Flux Kontext is heating up the competition).
In our last poll, everyone agreed—the biggest challenge in creating AI-generated videos is maintaining visual continuity from shot to shot. Check out today’s poll below.
In today's edition:
Runway Act-Two dramatically upgrades motion capture capabilities
WeTransfer reverses controversial AI training terms
Absen builds world's largest LED virtual production volume
Lightricks LTXV generates long-form AI video

Runway Act-Two Levels Up AI Performance Capture

Runway launched Act-Two, a major upgrade to its motion capture model that delivers enhanced fidelity and comprehensive tracking for head, face, body, and hand movements. The release is now available to all users.
Act-Two requires only a driving performance video and reference character image to animate any digital character, eliminating the need for expensive motion capture hardware or specialized studios.
The model supports diverse character types and art styles, from photorealistic humans to stylized cartoon characters, while maintaining performance fidelity across different environments and creative directions.
Enhanced tracking capabilities now include fine facial expressions and finger gestures, plus automatic environmental motion that makes characters appear more naturalistic without additional editing.
Pricing is set at 5 credits per second of generated animation with clips running up to 30 seconds at 24 fps, making it accessible for rapid prototyping and iterative workflows.
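To see what that pricing works out to in practice, here's a minimal sketch of a credit-cost calculator based on the published rate. The rounding behavior for partial seconds is an assumption, not something Runway has documented:

```python
import math

# Published Act-Two rate: 5 credits per second, clips capped at 30 seconds.
CREDITS_PER_SECOND = 5
MAX_CLIP_SECONDS = 30

def act_two_cost(seconds: float) -> int:
    """Return the credit cost for one clip, enforcing the 30-second cap.

    Assumption: partial seconds are billed as whole seconds (rounded up) —
    Runway's actual rounding rule isn't stated in the announcement.
    """
    if seconds <= 0:
        raise ValueError("clip length must be positive")
    if seconds > MAX_CLIP_SECONDS:
        raise ValueError(f"clips are capped at {MAX_CLIP_SECONDS} seconds")
    return math.ceil(seconds) * CREDITS_PER_SECOND

print(act_two_cost(30))  # a maximum-length clip costs 150 credits
```

So a full 30-second clip runs 150 credits, which is what makes short iterative passes cheap enough for rapid prototyping.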
The upgrade addresses key limitations from Act-One by improving motion consistency and expanding the range of trackable movements, positioning it as a more comprehensive solution for animation studios.
SPONSOR MESSAGE
Atomos Sumo: The Ultimate Monitor-Recorder

Experience the ultimate on-set solution with the Atomos Sumo 19SE – now available at an incredible value.
This production powerhouse delivers professional-grade monitoring, recording, and switching capabilities that would typically cost thousands more, all in one robust package that will transform your workflow without breaking the bank.
Premium Display Quality: Features a daylight-viewable 19-inch 1920x1080 touchscreen with stunning 1200-nit brightness and 10-bit color depth, covering the DCI-P3 color gamut for perfect visibility even in direct sunlight.
Professional Recording Capabilities: Capture content in multiple high-end formats, including ProRes RAW (up to 6Kp30), ProRes 422, and DNxHR up to 4K60p, with multi-channel recording for complete production flexibility.
Complete Connectivity: Equipped with four 12G-SDI inputs, HDMI 2.0 connectivity, signal cross-conversion, and professional audio support including dual XLR inputs with +48V phantom power.
Advanced Production Tools: Access essential monitoring features including waveform, vectorscope, false color, focus peaking, anamorphic de-squeeze, and comprehensive LUT support for both SDR and HDR workflows – tools typically found only in much more expensive systems.
Trusted Across The Industry: Used on hundreds of productions, from the latest season of The Last of Us to capturing hundreds of camera feeds for MrBeast videos.
Check out the Sumo - Only 1995 USD/EUR

WeTransfer Drops AI Clause After Backlash

WeTransfer quickly reversed controversial terms of service updates that suggested user content could train AI models after facing intense backlash from creative professionals. The Dutch file-sharing company removed all AI references from its updated policies and now explicitly states it will not use uploaded content for machine learning purposes.
The original clause granted WeTransfer rights to use content for "improving performance of machine learning models" as part of content moderation, sparking fears among users that their intellectual property could feed AI training datasets.
Creative professionals including filmmakers, artists, and photographers led the outcry on social media, with many threatening to cancel subscriptions over concerns their work would be used without permission to develop competing AI tools.
WeTransfer's swift response included revising the terms to state content is used solely "for the purposes of operating, developing, and improving the Service" with no AI training whatsoever, addressing user privacy and intellectual property concerns.
The controversy reflects broader industry tensions as creative professionals increasingly worry about how their work feeds into AI development, following similar backlash at companies like Adobe and Meta over data usage policies.
The Writers' Guild of Great Britain welcomed the clarification, emphasizing that "members' work should never be used to train AI systems without their permission" and highlighting the importance of explicit consent in creative industries.
Absen & Versatile Open World's Largest LED Volume

Absen has partnered with Versatile to create the world's largest LED virtual production volume in Deqing, China, breaking the single-screen pixel record for the industry. This massive installation sets a new benchmark for immersive filmmaking and in-camera visual effects.
The volume has a diameter of 50 meters and a height of 12 meters. The total floor area of the stage reaches 5,000 square meters, with the LED display covering approximately 1,700 square meters—equivalent to four basketball courts.
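The quoted figures hang together if you do the geometry. As a back-of-the-envelope check (our arithmetic, not Absen's), treating the volume as a 50-meter-diameter, 12-meter-tall cylinder and comparing its full wall area to the reported LED coverage:

```python
import math

# Figures quoted in the announcement
diameter_m = 50
height_m = 12
led_area_m2 = 1700  # approximate LED display area

# Wall area of a full 360-degree cylindrical volume: circumference x height
full_wall_area = math.pi * diameter_m * height_m  # ~1885 m^2

# Fraction of the full circumference the LED wall would cover
coverage = led_area_m2 / full_wall_area

print(round(full_wall_area), round(coverage, 2))
```

That puts the LED wall at roughly 90% of a full cylinder, consistent with a volume that leaves an opening for cameras, crew, and equipment rather than wrapping a complete circle.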
Productions can achieve up to 160° viewing angles without color shifts, allowing cameras and actors to move freely around the set without technical limitations.
The modular design allows studios to reconfigure or expand the volume for different scenes and projects, maximizing flexibility for various production needs.
Energy efficiency improvements deliver up to 40% less power consumption compared to comparable LED systems, reducing operational costs for extended shooting schedules.

Lightricks LTXV Breaks 60-Second AI Video Barrier

Lightricks has released an updated version of their LTXV model that generates AI videos longer than 60 seconds for the first time, breaking through the industry's previous 8-second limit. The company positions itself as the first to enable long-form AI video creation at scale.
60-second AI video generation just got unlocked!
LTXV is the first model to generate native long-form video, with controllability that beats every open source model.
- 8× longer than typical gen video
- 10–100× faster & cheaper
- Runs even on consumer GPUs
- Pose, depth & …
— LTX Video (@LTX_Video)
12:49 PM • Jul 16, 2025
The model runs 30 times faster than comparable solutions while maintaining quality, thanks to its 13-billion parameter architecture that works on standard consumer GPUs rather than requiring expensive hardware.
LTXV introduces Continuous Control LoRAs that allow creators to input signals for pose, depth, and visual style throughout the entire video generation process, not just at the beginning like previous models.
The real-time streaming capability means creators can watch their video generate frame by frame while making live adjustments and refinements, reducing the typical trial-and-error workflow of AI video creation.
Unlike proprietary competitors like OpenAI's Sora or Runway's Gen-4, LTXV offers open-source weights and codebase on platforms like Hugging Face, enabling developers to experiment and customize the model for specific use cases.
The technology supports both text-to-video and image-to-video workflows, giving creators flexible entry points for different types of projects from advertising to game development.

Watch this miniature remake of the iconic Fast & Furious final race—Supra vs. Charger—recreated in 1:24 scale as a tribute to the original 2001 film.

Stories, projects, and links that caught our attention from around the web:
🗣️ Deepgram has launched Saga, a Voice OS specifically for developers, aiming to simplify the integration of voice technology into applications.
📲 Freepik has launched its first-ever mobile app, bringing AI-powered creative tools to iOS and Android devices.
⚡ Luma Labs has announced that the new Ray2 Flash model is now available for Modify Video and via the Modify Video API.
🎥 Aiarty has launched the Aiarty Video Enhancer, an AI-powered platform designed to clean up and restore imperfect film footage.
📷 Insta360 has introduced lens filters and a larger battery for its 360-degree action camera to enhance image quality and extend shooting time.

This week on Denoised, Addy and Joey break down how ComfyUI unlocks the power of AI image generation—from diffusion models and latent space to advanced workflows with LoRAs and image-to-image tools.
Read the show notes or watch the full episode.
Watch/Listen & Subscribe

👔 Open Job Posts
🆕 Architect (Rhino/Grasshopper/Revit/Blender)
Runway
Remote
🆕 VFX Artist (Runway/Flame/Nuke)
Runway
UK
AR/VR Intern
LumeXR
Kerala, India
Virtual Production Intern
Orbital Studios
Los Angeles, California

📆 Upcoming Events
🆕 July 30
RED Cinema Broadcast Demo Day at Trilogy Studios
Fort Worth, TX
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.


Source: Sam Sykes on X
Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.