Welcome to VP Land! Runway's Act-Two isn't the only human-performance-driven AI animator anymore: Wan has launched yet another free, open-source model, Wan2.2-Animate.

In the last poll, most of you agreed that Critterz, the AI-animated feature that cut production costs by 80%, could reshape studio production models. Check out today's poll below.

In today's edition:

  • Luma AI unveils Ray3 for studio-grade HDR video

  • Television Academy releases comprehensive AI guidelines

  • Meta launches Ray-Ban Display smart glasses with integrated AI

  • ComfyUI introduces Comfy Cloud for browser-based AI workflows

Luma's Ray3 Brings Studio HDR to AI Video

Luma AI launched Ray3, the first generative video model that creates studio-grade HDR content natively. The model generates vivid, high dynamic range videos from text prompts and exports as 16-bit EXR files for professional workflows.

  • Ray3 is the world's first reasoning video model that understands both visual and conceptual prompts to plan coherent scenes with consistent characters and natural motion.

  • The model outputs native HDR video with enhanced color depth and contrast, making it viable for advertising, filmmaking, and professional digital media production.

  • New Draft Mode enables rapid prototyping with annotation tools that let you draw directly on frames to influence motion and design without additional prompting.

  • The model generates 10-second clips at 1080p resolution with state-of-the-art physics simulation for realistic object interactions and movement.

SPONSOR MESSAGE

Eddie AI: Edit a Rough Cut in 15 Minutes

Eddie AI is a professional-grade, AI-powered video editing assistant built for creators, editors, and teams who want to streamline their workflow without sacrificing quality.

It automates tedious tasks like cutting interviews, scripting content, logging A-roll and B-roll, organizing assets, handling multicam podcast edits, and generating social clips. Users can even query transcripts through prompts to quickly find answers within their footage.

New features this week:

  • Agentic story development - Feed Eddie a URL in Rough Cut mode, and it automatically pulls key messages, brand positioning, and background context into the edit—so early cuts are aligned with client narratives from the start. (Treatment input coming soon!)

  • Extended rough cuts - Rough cut limit now stretches to 40 minutes, giving YouTubers, documentarians, and webinar creators room to edit long-form projects in one pass without external workarounds.

  • Smarter B-roll logging - Eddie now analyzes both visuals and dialogue in B-roll, making background footage searchable by what’s seen and said—surfacing contextual moments that often get missed.

Ready to see how much faster your edits can be? Try Eddie AI today and experience a smoother, smarter workflow that keeps you focused on the story, not the slog.

Television Academy Releases AI Guidelines

The Television Academy released the industry's most comprehensive AI guidelines, built around three core principles that more than 30,000 members helped shape. The new standards, rolled out after the Primetime Emmys, aim to balance creative innovation with ethical safeguards.

  • Creative Integrity requires that AI tools support rather than replace human creative vision, ensuring original storytelling remains at the center of production decisions.

  • Permissions and Legal Viability mandates that creators secure proper rights, licenses, and permissions before using AI-generated content, especially for commercial submissions.

  • Accountability and Transparency demands clear documentation of how AI tools are used, who's responsible for outputs, and sustainable integration practices that consider environmental impact.

The guidelines emerged from an AI Task Force chaired by producer Eric Shamlin, with significant input from Christina Lee Storm and other industry leaders. Unlike previous industry responses focused on restrictions, these guidelines provide a framework for responsible adoption while protecting creator rights and maintaining quality standards.

Meta Puts Displays in Ray-Ban Frames

Meta just launched the first consumer smart glasses with integrated AI and a built-in display. The Ray-Ban Display glasses deliver what Google Glass promised over a decade ago: a functional screen on your face that actually works.

  • The glasses feature a 600x600 pixel display in the right lens that shows real-time captions, live translations, and contextual information overlays.

  • You control everything through a neural wristband that reads muscle signals from your wrist, letting you navigate menus and apps without touching anything.

  • The $799 price includes a 12MP camera, six-hour battery life, and the ability to capture content hands-free while accessing Meta AI.

  • The glasses launch September 30th at select US retailers with in-person fitting required, but you'll still need your phone nearby since they're designed as a companion device rather than a replacement.

AI Tool Updates You Should Know

This week saw new AI tools and major updates roll out.

  • VEED launched Fabric 1.0, an AI tool that turns any single image into a talking video using just audio or text input. The browser-based platform generates up to one-minute videos in seconds, making it 60x cheaper and 7x faster than traditional video production.

  • ComfyUI just launched Comfy Cloud, bringing its powerful node-based AI workflow tools directly to your browser. This new platform eliminates the need for local installation and complex setup.

  • ElevenLabs launched Studio 3.0, a browser-based editor that brings all their AI tools into one timeline workflow: AI voice synthesis, music generation, SFX, and video editing.

  • Topaz Labs rolled out major updates to its AI-powered photo editing suite, introducing cloud processing and new generative upscaling models.

  • Dzine has launched its AI Video Editor that lets you control color, style, and lighting in your footage using simple text prompts and image references.

In our Inside the AI Studio series, we sat down with Phantom X co-founder Kavan The Kid at AI on the Lot to break down Echo Hunter, the first SAG-approved AI film. He explains how his team trained custom models on actors, captured performances with Runway Act-One, and built a full pipeline from concept to execution.

Stories, projects, and links that caught our attention from around the web:

🛠️ Fab is now seamlessly integrated into the Epic Games Launcher, giving creators direct access to the massive marketplace of 3D assets, materials, and tools without switching platforms.

🎮 DJI announced the DJI Mini 5 Pro, which can shoot 10-bit 4K footage up to 120 fps.  

📸 Legendary cinematographer Roger Deakins shares never-before-seen storyboards, sketches, and photos from his classic films in his new book Reflections: On Cinematography.

🤖 Vu released a new whitepaper, The Future of AI Content Production, examining how artificial intelligence is transforming every stage of film and media creation.

Joey and Addy dive into Seedream 4.0, Comfy Cloud, Stable Audio 2.5, plus new gear from RED and Nikon, the Vimeo acquisition, and more hot AI news this week.

Read the show notes or watch the full episode.

Watch/Listen & Subscribe

📆 Upcoming Events

September 23 to 24
CFX 2025
Chattanooga, TN

October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
