
Welcome to VP Land! We got an exclusive look at some of the new tech Vu has been developing. They're basically building an AI-powered holodeck. And a way to run Unreal Engine with a few buttons from a kiosk. Check out the breakdown below.
Exactly three years ago today, Stability AI released the first Stable Diffusion text-to-image model. Imagine where things will be three years from now 🤯
In our last poll, most of you agreed that you’d use portable cinema robots primarily for creative shots on location. Check out today's poll below.
In today's edition:
Vu's New Tech
Qwen-Image-Edit open-sources powerful image editing
New Runway updates
Run and gun dragon VFX shoot

Vu's AI-Powered LED Walls

Vu gave us an exclusive first look at new tech that could reshape how you create content, built around their vision of "content at the speed of thought." The Tampa-based company is rethinking content creation for an AI-driven future where you talk to and point at a giant display in real time.
Image-Based Lighting automatically syncs your studio lights to whatever environment appears on the LED wall, so the physical lighting matches the virtual scene.
A streamlined Unreal Engine integration lets you load and navigate 3D scenes directly through a simple touch screen interface on Vu Studio. Plus, they're planning a marketplace for ready-to-use environments.
You can now generate and modify images by voice, asking the wall to "create an office scene with bright window light and a red chair" or control the Vu Studio system directly, such as adjusting exposure by simply saying "go down one stop."
Two new solutions bring Vu's software to existing setups: Vu Core is a new media server in a box that works with any display, while a browser-only version lets you cast to smart TVs without additional equipment (resolution and external connections are limited on the browser-only version).
They're also expanding into other industries, bringing their Vu One Mini into the boardroom, with team collaboration features like personalized dashboards and a collaborative brainstorming tool.
SPONSOR MESSAGE
Atomos Sumo: The Ultimate Monitor-Recorder

Experience the ultimate on-set solution with the Atomos Sumo 19SE – now available at an incredible value.
This production powerhouse delivers professional-grade monitoring, recording, and switching capabilities that would typically cost thousands more, all in one robust package that will transform your workflow without breaking the bank.
Premium Display Quality: Features a daylight-viewable 19-inch 1920x1080 touchscreen with stunning 1200-nit brightness and 10-bit color depth, covering the DCI-P3 color gamut for perfect visibility even in direct sunlight.
Professional Recording Capabilities: Capture content in multiple high-end formats, including ProRes RAW (up to 6Kp30), ProRes 422, and DNxHR up to 4K60p, with multi-channel recording for complete production flexibility.
Complete Connectivity: Equipped with four 12G-SDI inputs, HDMI 2.0 connectivity, signal cross-conversion, and professional audio support including dual XLR inputs with +48V phantom power.
Advanced Production Tools: Access essential monitoring features including waveform, vectorscope, false color, focus peaking, anamorphic de-squeeze, and comprehensive LUT support for both SDR and HDR workflows – tools typically found only in much more expensive systems.
Trusted Across The Industry: Used on hundreds of productions, from the latest season of The Last of Us to capturing hundreds of camera feeds for MrBeast videos.
Check out the Sumo - Only 1995 USD/EUR

Qwen-Image-Edit - The Open-Source 'Kontext'

Alibaba's Qwen team launched Qwen-Image-Edit, a new AI-powered image editing system. The tool offers image modification capabilities similar to Flux Kontext, but it can run locally on your machine.
🚀 Excited to introduce Qwen-Image-Edit!
Built on 20B Qwen-Image, it brings precise bilingual text editing (Chinese & English) while preserving style, and supports both semantic and appearance-level editing.
— Qwen (@Alibaba_Qwen)
5:51 PM • Aug 18, 2025
The system uses a 20-billion-parameter model that handles both high-level edits like object removal and low-level pixel adjustments like background changes.
You can edit text within images in both Chinese and English while preserving the original font style, color, and scale.
Qwen-Image-Edit achieved state-of-the-art results on standard editing benchmarks, outperforming previous models, especially for detailed, multi-step edits.
The tool accepts natural language instructions and supports common file formats like JPG, PNG, and WebP for quick integration into existing workflows.

Runway Adds Veo 3, Game World Opens Up

Runway dropped a grab bag of updates this week.
They're opening up their platform to select third-party models, starting with Google's Veo 3.
Gen-4 Image Turbo is now available to all web users, generating images with reference photos in 10 seconds or less at 2.5-4x lower cost while maintaining 93.3% quality compared to standard Gen-4.
Act-Two now includes Voices, giving you control over how your performance sounds to better match your generated character.
Game Worlds, teased a few months ago, is now in open beta. It creates real-time, personalized interactive experiences where every session generates unique stories, characters, and media as you play.

Compositing Academy breaks down a real-world workflow for pulling off a dragon VFX shot in Iceland on a tiny budget.

Stories, projects, and links that caught our attention from around the web:
🎬 Taylor Sheridan and Paramount will launch a massive film studio in Texas to expand regional production capacity.
🇬🇧 Marvel is shifting much of its film production from Georgia to London.
🧟‍♂️ Guillermo del Toro says he doesn't want AI, just "old-fashioned craftsmanship" on his upcoming Frankenstein.
🗣️ Google has introduced a new feature allowing its Gemini AI to read Google Docs aloud.
🌀 Google has launched a Flow-dedicated X channel to share product updates, best practices, filmmaker stories, and videos created using Flow.
🧑‍🎤 Nvidia’s RTX tech gives the latest Indiana Jones game a more realistic digital Harrison Ford by simulating hair with thousands of virtual spheres.

In the latest episode of Denoised, we look at Hollywood’s Quibi-inspired MicroCo slashing costs with AI and the shifting production landscape across Texas, Georgia, LA, and London.
Read the show notes or watch the full episode.
Watch/Listen & Subscribe

👔 Open Job Posts
Architect (Rhino/Grasshopper/Revit/Blender)
Runway
Remote
VFX Artist (Runway/Flame/Nuke)
Runway
UK
Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events
August 23 to 25
Runway’s Gen:48 Aleph Edition
Remote
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.


Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.