Just arrived in Vegas for NAB. Sorry for the late send - what should have been a four-hour drive turned into seven.
Even though NAB starts Sunday, we’re filming interesting interviews and tech breakdowns starting tomorrow, so make sure you’re subscribed to our YouTube channel.
Bunch of NAB updates in this issue (we also have a breakdown on Denoised podcast). But that’s not all the news - lots of interesting AI updates with some new, powerful video models (and Midjourney released an alpha of version 7).
Joey
Blackmagic just wrapped up their mammoth 2.5-hour NAB livestream. We’re still digging through it, but the main highlights are a 12K PYXIS camera (leaked yesterday), a new high-end ATEM Mini, Resolve 20 with local AI voice modification and script sync, and autofocus! (only on the Cinema 6K Camera…for now)
Strada announced Strada Agents, a new streaming technology that enables remote access to media files without cloud storage requirements.
Strada Agents allows instant access to media stored on Mac computers, external drives, NAS devices, or shared storage solutions like Avid NEXIS and LucidLink
The technology eliminates cloud storage costs while enabling remote team collaboration
Adaptive encoding ensures smooth playback regardless of bandwidth limitations
Currently available for macOS via Open Beta, with Windows, Linux, and mobile versions in development
Additional features including sharing, commenting, and file download capabilities are planned for future releases
Adobe has launched major AI-powered updates to Premiere Pro, including Generative Extend in 4K, Media Intelligence for rapid footage search, and automated Caption Translation across 27 languages.
Generative Extend, now generally available, uses Adobe's Firefly Video Model to automatically lengthen video and audio clips in 4K and vertical formats, helping editors cover gaps in footage without requiring reshoots
Media Intelligence uses AI to identify and search content within footage based on objects, locations, camera angles, and metadata, dramatically reducing time spent looking for specific clips
Caption Translation instantly generates multilingual subtitles, while new Color Management automatically transforms log footage to HDR/SDR for immediate high-quality editing
After Effects received upgrades including high-performance preview playback, enhanced 3D tools, and HDR monitoring capabilities
Frame.io V4 now offers expanded storage, document review tools, and automated transcription services, centralizing the entire media workflow
Celtx has launched a Screenplay Plugin for Adobe Premiere Pro that creates the first direct integration between a screenwriting tool and a non-linear editor, enabling editors to automatically generate project frameworks from scripts and expedite rough assembly cuts without leaving Premiere Pro.
SPONSOR MESSAGE
The easiest way for a busy person to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI
Former Snap AI head Alex Mashrabov has launched Higgsfield AI, a new generative video platform that brings sophisticated camera movements to AI-generated content without specialized equipment. The platform addresses a critical gap in current AI video tools by focusing on cinematic language and camera techniques rather than just visual quality, enabling creators to produce more professional, story-driven content.
Higgsfield's control engine allows users to implement complex camera movements like dolly-ins, crash zooms, and overhead sweeps using just a single image and text prompt
The platform specifically targets serialized short-form dramas for platforms like TikTok and YouTube Shorts, a market projected to reach $24 billion by 2032
Filmmaker Jason Zada demonstrated the technology with Night Out, showcasing fluid camera motion generated entirely through Higgsfield's interface
Unlike competitors Runway, Pika Labs, and OpenAI who focus on visual fidelity, Higgsfield emphasizes storytelling through movement and perspective
Read more to learn how Higgsfield's technology maintains character and scene consistency over longer sequences.
MoCha, a joint effort by Meta and the University of Waterloo, represents a significant advancement in AI-generated character animation, enabling full-body character animations directly from speech and text inputs.
Unlike previous systems limited to "talking heads," this new technology creates complete character performances with synchronized speech, expressions, and body movements. The innovation allows for movie-grade animations, multi-character conversations, and dynamic environments without requiring additional control signals like reference images or skeletons.
🚀Thrilled to introduce ☕️MoCha: Towards Movie-Grade Talking Character Synthesis
✨We defined a novel task: Talking Characters, which aims to generate character animations directly from Natural Language and Speech input.
— Cong Wei (@CongWei1230)
1:12 AM • Apr 1, 2025
MoCha introduces a "speech-video window attention mechanism" that precisely aligns speech with video elements, creating more realistic lip synchronization and speech-video coherence
The system uses a joint training strategy combining both speech and text data, overcoming the limitations of scarce speech-labeled datasets
Unlike prior methods, MoCha operates end-to-end without auxiliary control signals, simplifying the animation creation process
The technology supports multi-character conversations with turn-based dialogue, enabling context-aware interactions
Human preference studies and benchmark comparisons show MoCha achieves superior realism and generalization compared to existing methods
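For readers curious what a "speech-video window attention mechanism" might look like in practice, here is a minimal, purely illustrative sketch: each video frame is restricted to attending only to speech tokens near its aligned position in time. All names and the window logic are our own assumptions for illustration; this is not Meta's actual implementation.

```python
import numpy as np

def speech_video_window_mask(num_frames, num_speech_tokens, window=2):
    """Illustrative sketch (assumed details, not MoCha's actual code):
    build a boolean attention mask where each video frame may attend
    only to speech tokens within a local window around its aligned
    position on the speech timeline."""
    mask = np.zeros((num_frames, num_speech_tokens), dtype=bool)
    for f in range(num_frames):
        # Linearly map the frame index onto the speech-token timeline
        center = round(f * (num_speech_tokens - 1) / max(num_frames - 1, 1))
        lo = max(center - window, 0)
        hi = min(center + window + 1, num_speech_tokens)
        mask[f, lo:hi] = True  # frame f sees only nearby speech tokens
    return mask

# 8 video frames aligned against 32 speech tokens
m = speech_video_window_mask(8, 32, window=2)
```

The intuition is that lip movement at a given frame depends mainly on the phonemes being spoken at that moment, so masking out distant speech tokens encourages tighter lip sync than unrestricted cross-attention.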
Filmmaker and photographer Florent Piovesan of Of Two Lands shares his first impressions of the Blackmagic Design URSA Cine 17K 65 during a recent shoot in Iceland.
Stories, projects, and links that caught our attention from around the web:
🎥 Foundry launches Nuke Stage, a purpose-built tool bringing real-time compositing to virtual production and bridging the gap between VFX and ICVFX.
🌟 The Aputure Storm XT52 is now available, offering 5200W of high-output LED lighting with a wide color temperature range of 2500K-10000K, making it a powerful addition for large-scale film and TV productions.
🤸‍♂️ Mo-Sys has introduced the StarTracker Mini, an ultra-compact camera tracking system that brings dependable and precise tracking technology to smaller studios and educational settings.
🗄️ OWC will feature new storage solutions like the ThunderBlade X12, which supports high-capacity storage and fast speeds ideal for video editing and production.
☁️ The integration of MASV with Frame.io V4 will improve media production workflows by enabling faster and more reliable capture-to-cloud file transfers, making it easier for teams to manage large-scale projects.
🗂️ OpenDrives is upgrading its Atlas storage platform with new composable bundles that offer media organizations cost predictability and scalable performance without unnecessary features.
⚡️ By integrating with platforms like Frame.io and Qumulo, MASV streamlines media workflows, offering scalable and secure data transfers that aid in faster content delivery.
📱 Lightcraft Jetset has updated its iPhone app with key features like live cinematic compositing and Gaussian Splat support, enhancing virtual production capabilities for filmmakers.
👩🏻‍💻 Maxon's latest updates to Maxon One significantly enhance creative tools for VFX and 3D artists, offering new features across Cinema 4D, ZBrush, and Red Giant that improve workflow efficiency and artistic flexibility.
🕹️ Epic Games has acquired Loci, a leading developer of AI technology for 3D content understanding, to enhance asset management and discoverability across their ecosystem.
🌄 Cinnafilm and NVIDIA have developed an AI-powered HD to UHD upconversion tool that significantly enhances video quality and speed, setting a new industry benchmark.
Art Director, Generative AI - Avatars
Meta
Los Angeles, California
XR Instructor
Immxrsive - National Centre of Excellence for Immersive Tech
Ontario, Canada
AR/VR Intern
LumeXR
Kerala, India
Virtual Production Intern
Orbital Studios
Los Angeles, California
April 6 to 9
NAB Show Las Vegas
Las Vegas, NV
April 9 to 10
Virtual Productions Gathering 2025
Breda, Netherlands
June 5 to 8
Cine Gear Los Angeles Expo 2025
Los Angeles, CA
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.
Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.