
Welcome to VP Land! Adobe has made a significant move in mobile photography with the quiet launch of Project Indigo, a free computational camera app for iPhone.
Last week, we asked about using AI-generated videos for client/personal projects. Most of you were split between 'Testing soon' and 'Waiting for maturity.' Check out today's poll below.
In today's edition:
Adobe quietly launches Project Indigo for iPhone
Spline Path Control v2 brings visual motion control to ComfyUI
Arcads AI tackles the consistent AI actor challenge
ByteDance's Seedance 1.0 leads AI video generation
Magic Lantern firmware returns for Canon DSLRs

Adobe's Secret iPhone Camera App

Adobe quietly released Project Indigo, a free computational photography camera app that transforms how creators capture images on iPhone. The app combines advanced multi-frame processing with professional manual controls to deliver SLR-quality results from mobile devices.
Multi-frame processing captures up to 32 underexposed images per shot and combines them algorithmically to reduce noise and increase dynamic range, delivering cleaner shadows and better highlight retention than standard smartphone cameras.
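Adobe hasn't published Indigo's internals, but the core of burst merging is easy to sketch. The Python snippet below (using NumPy and OpenCV; the file names, frame count, and gain value are illustrative, not Indigo's) averages a stack of underexposed frames to suppress noise, then applies a digital gain to restore exposure:

```python
import numpy as np
import cv2  # OpenCV, used here only for reading and writing images

# Hypothetical file names for a burst of underexposed frames
# (Project Indigo's real pipeline is not public; this is just the core idea.)
frame_paths = [f"burst_{i:02d}.png" for i in range(32)]

# Load frames as floating-point arrays so the average doesn't clip
frames = [cv2.imread(p).astype(np.float32) for p in frame_paths]

# Averaging N frames reduces random sensor noise by roughly sqrt(N)
merged = np.mean(frames, axis=0)

# The burst was deliberately underexposed to protect highlights,
# so apply a simple digital gain to bring the exposure back up
gain = 4.0  # illustrative value; a real pipeline would derive this from metadata
result = np.clip(merged * gain, 0, 255).astype(np.uint8)

cv2.imwrite("merged.png", result)
```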
The app offers full manual controls over focus, ISO, shutter speed, and white balance while maintaining computational benefits for both JPEG and RAW (DNG) outputs, giving photographers professional flexibility without sacrificing image quality.
Digital zoom uses multi-frame super-resolution technology that captures multiple slightly offset images and fuses them to reconstruct genuine detail, avoiding the artificial artifacts common in AI-based upscaling methods.
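Again, the exact pipeline isn't public, but classic "shift-and-add" super-resolution captures the principle: estimate each frame's sub-pixel offset, place every frame onto a finer grid, and average. A minimal sketch (Python with SciPy and scikit-image; the file names, frame count, and 2x scale are assumptions):

```python
import numpy as np
from scipy import ndimage
from skimage import io
from skimage.registration import phase_cross_correlation

SCALE = 2  # target upscaling factor (illustrative)

# Hypothetical burst of slightly offset, grayscale low-res frames
paths = [f"zoom_burst_{i:02d}.png" for i in range(8)]
frames = [io.imread(p, as_gray=True).astype(np.float64) for p in paths]

reference = frames[0]
accumulated = np.zeros((reference.shape[0] * SCALE, reference.shape[1] * SCALE))

for frame in frames:
    # Estimate the sub-pixel offset between this frame and the reference
    shift, _, _ = phase_cross_correlation(reference, frame, upsample_factor=20)

    # Upsample onto the high-res grid, then shift by the scaled offset so
    # every frame lands on the same high-res coordinates
    upsampled = ndimage.zoom(frame, SCALE, order=3)
    aligned = ndimage.shift(upsampled, shift * SCALE, order=3)
    accumulated += aligned

# Averaging the aligned frames recovers detail between the original pixels
super_res = accumulated / len(frames)
io.imsave("super_res.png", (np.clip(super_res, 0, 1) * 255).astype(np.uint8))
```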
Long Exposure mode enables creative motion blur effects and light painting techniques typically requiring DSLR cameras and neutral density filters, expanding creative possibilities for mobile filmmakers and photographers.
Direct integration with Lightroom Mobile allows seamless editing workflows, with the app automatically launching Lightroom for immediate post-processing of captured images.
SPONSOR MESSAGE
Vu Studio: Create at the Speed of Thought

Vu Studio transforms virtual production into an accessible, all-in-one platform for filmmakers of all levels. This comprehensive software suite brings together storyboarding, environment building, previsualization, and shooting capabilities in a single intuitive interface – something never before available in one place.
Key features include:
SceneForge Studio with powerful toolset for building sets, designing camera moves, and lighting scenes
AI-powered content generation that transforms ideas into instant visual representations using natural language processing
Extensive asset library pre-loaded with hundreds of free elements, plus marketplace integration
Vu Media Server for seamless content casting to LED walls with adjustable brightness, scale, color, and more
Integrated lighting control to manage set lights directly within the application
The platform integrates with industry-standard tools like Frame.io, streamlining collaboration and feedback across your production team. Whether you're creating storyboards, building virtual environments, or controlling multiple display surfaces, Vu Studio puts everything at your fingertips.
Ready to create content at the speed of thought? Start your Vu Studio journey today at vu.network/vu-studio.
Mention "VP Land" when subscribing for an exclusive 10% discount on annual subscriptions.

Spline Path Control Brings Fine-Tuned Control to AI Video

A new open-source tool called Spline Path Control v2 lets creators animate objects along customizable motion paths without writing complex prompts or code. The free tool integrates directly with ComfyUI, making AI-powered video animation more accessible through visual drag-and-drop controls.
The tool's multi-spline editing feature allows creators to control multiple objects simultaneously, enabling complex choreography where different elements move along independent paths within a single scene.
You can export your animations as .webm files and integrate them seamlessly with AI video models like Wan2.1, creating sophisticated motion sequences from static images.
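Spline Path Control itself runs as a ComfyUI node, but the idea of a spline-driven motion guide is straightforward to sketch outside it. The snippet below (Python with SciPy and Pillow; the control points, canvas size, and frame count are made up for illustration) fits a smooth spline through a few control points and renders a moving marker as a frame sequence, the same kind of guide clip a model like Wan2.1 can follow; the real tool packages these frames as a .webm:

```python
import numpy as np
from scipy.interpolate import splprep, splev
from PIL import Image, ImageDraw

WIDTH, HEIGHT, FRAMES = 640, 360, 48  # illustrative canvas size and clip length

# Hypothetical control points the artist would drag around in the editor
points_x = [60, 200, 420, 580]
points_y = [300, 80, 280, 60]

# Fit a smooth parametric spline through the control points
tck, _ = splprep([points_x, points_y], s=0)

# Sample one position along the spline for every output frame
u = np.linspace(0.0, 1.0, FRAMES)
xs, ys = splev(u, tck)

for i, (x, y) in enumerate(zip(xs, ys)):
    # Each frame is a black canvas with a white marker at the current position;
    # stacked together, the frames form a motion guide for the video model
    frame = Image.new("RGB", (WIDTH, HEIGHT), "black")
    draw = ImageDraw.Draw(frame)
    r = 12
    draw.ellipse([x - r, y - r, x + r, y + r], fill="white")
    frame.save(f"guide_{i:03d}.png")
```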
The visual interface eliminates the need for prompt engineering or coordinate-based scripting, making advanced animation techniques accessible to non-technical creators who previously struggled with text-based motion control.
Community response has been enthusiastic, with creators praising the "no extra prompting" approach as a major breakthrough for democratizing procedural animation in AI-driven workflows.
The open-source release enables developers to contribute improvements and adapt the tool for specialized pipelines, positioning it as a potential standard in the rapidly evolving AI animation ecosystem.
Magic Lantern Returns, Adding Support for Newer Canon DSLRs

The legendary open-source firmware that transforms Canon DSLRs into pro-level filmmaking tools has officially returned after years of dormancy. Magic Lantern announced new builds with preliminary support for Canon EOS 200D, 750D, 6D Mark II, and 7D Mark II cameras.
RAW video recording becomes possible on budget Canon bodies that never officially supported it, alongside monitoring aids like focus peaking, zebras, and custom overlays usually reserved for expensive cinema cameras.
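Magic Lantern's focus peaking runs in camera firmware, but the technique itself is simple: highlight the pixels with the strongest local contrast, since that's where focus falls. A rough desktop sketch (Python with OpenCV; the threshold and overlay color are arbitrary choices, not Magic Lantern's):

```python
import cv2
import numpy as np

# Hypothetical still frame; in-camera this would be the live-view feed
frame = cv2.imread("liveview_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# High-frequency detail (strong local contrast) marks the in-focus areas
edges = np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=3))

# Keep only the strongest responses; the percentile threshold is arbitrary
threshold = np.percentile(edges, 98)
mask = edges > threshold

# Paint the in-focus pixels with a bright overlay color (red here, in BGR)
peaked = frame.copy()
peaked[mask] = (0, 0, 255)

cv2.imwrite("focus_peaking_overlay.png", peaked)
```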
The new builds mark Magic Lantern's first leap beyond classic DIGIC 5 hardware, with developers finally cracking the technical hurdles posed by Canon's newer firmware architectures.
Early testing shows the Canon 200D delivers two stops more dynamic range than the EOS M at ISO 100, and even outperforms the 5D Mark III at lower ISO settings.

Edward Dawson-Taylor dives into how AI is reshaping VFX and virtual production—Gaussian Splats, real-time pipelines, and generative tools for all. Plus: integration hurdles, creative workflows, and where filmmaking meets machine intelligence.

Stories, projects, and links that caught our attention from around the web:
🤖 Cineverse has launched cineSearch for Business, an AI-powered tool designed to enhance content search and discovery for digital platforms and streaming services.
🕶️ Snap aims to challenge competitors like Apple and Meta by launching its own consumer AR glasses next year.
💻 Apple has removed FireWire support from macOS 26, marking the end of an era for the once-popular "USB killer" interface.
📸 Peak Design has officially launched its Pro Tripod, offering advanced stability and portability features for filmmakers and photographers.
🎓 Panasonic LUMIX has launched its LUMIX EDU program, offering students and educators nationwide access to educational resources and discounts on camera gear.

In the latest Denoised episode, we covered a range of AI updates from the past week.
Midjourney V1 launches as an image-to-video tool that creates four five-second clips at 480p, marking the company's transition from static image generation to video synthesis.
MiniMax Hailuo 02 introduces physics-aware rendering and native 1080p output with 2.5x better inference throughput than Google's Veo, supported by a dataset four times larger than its predecessor.
ByteDance Seedance 1.0 Pro reportedly outperforms Google's Veo 3 in key benchmarks, emphasizing controllable video generation for professional use despite higher operational costs.
Black Forest Labs continues advancing their FLUX line with the new Kontext model, while Arcads AI tackles character consistency challenges that have plagued AI-generated video content.
Read the show notes or watch the full episode.
Watch/Listen & Subscribe

👔 Open Job Posts
Virtual Production Instructional Specialist
College of Motion Picture Arts - Florida State University
Tallahassee, FL
AR/VR Intern
LumeXR
Kerala, India
Virtual Production Intern
Orbital Studios
Los Angeles, California

📆 Upcoming Events
July 8 to 11
AI for Good Film Festival 2025
Geneva, Switzerland
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.


Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.