
Welcome to VP Land! It's a huge week for open-source AI, with major new tools pushing the boundaries of video animation and image editing. We're seeing some serious contenders for 'Nano Banana for X' as these powerful models become more accessible.
In last week’s poll, most of you felt that Luma AI’s Ray3, which delivers native 16-bit HDR video output, is promising but still needs more rigorous testing before full adoption. Check out today’s poll below.
In today's edition:
Alibaba Debuts Open-Source Wan2.2-Animate for Character Animation
Google's AI on Screen Premieres Sweetwater, Explores "Generative Ghosts"
Qwen-Image-Edit Might Be an Open-Source 'Nano Banana' Alternative
Decart Open-Sources Lucy Edit, a Text-Based Video Editor (aka the Video Version of 'Nano Banana')

Character Animation Gets Open Alternative with Wan2.2-Animate

Alibaba's Wan research group just released Wan2.2-Animate, a character animation model that's actually designed for animation workflows rather than text-to-video generation. Both the model weights and inference code are completely open source.
Character-focused architecture - Unlike most AI video models that treat everything as general motion, Wan2.2-Animate specifically targets character animation and replacement tasks. This means it's built to handle the frame-to-frame consistency issues that plague most AI animation attempts.
Full open source release - You get both the trained model weights and the actual inference code, not just API access or a web interface (a quick download sketch follows this list). This puts it in direct competition with closed solutions like Runway's Act-Two, but without the subscription fees or usage limits.
Temporal coherence focus - The "high-fidelity" claim centers on maintaining character integrity across frames, addressing the common AI video problems of limb warping, face drift, and inconsistent proportions that make most AI-generated characters look uncanny.
Community integration potential - With full code access, expect community-built plugins for Blender, integration experiments with existing VFX pipelines, and the usual open source ecosystem development that makes tools actually useful for production work.
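Since the weights are public, pulling them down locally is straightforward. Here's a minimal sketch, assuming the release is mirrored on Hugging Face under a repo ID like the one below (verify against Alibaba's official announcement):

```python
# Hedged sketch: download the open Wan2.2-Animate weights for local inference.
# The repo ID below is an assumption -- check it against the official release.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Wan-AI/Wan2.2-Animate-14B",  # assumed Hugging Face repo ID
    local_dir="./wan2.2-animate",
)
print(f"Weights saved to {local_path}")
# The released inference code would then consume these weights along with a
# reference character image and a driving video.
```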
SPONSOR MESSAGE
Typing is a thing of the past
Typeless turns your raw, unfiltered voice into beautifully polished writing - in real time.
It works like magic, feels like cheating, and allows your thoughts to flow more freely than ever before.
With Typeless, you become more creative. More inspired. And more in-tune with your own ideas.
Your voice is your strength. Typeless turns it into a superpower.

Michael Keaton's New Google AI Film

Google's AI on Screen program just premiered its first commissioned short film, Sweetwater, starring and directed by Michael Keaton, with a screenplay by his son Sean Douglas. The 15-minute film screened at Cinema Village in New York, followed by a father-son Q&A.
The story itself is surprisingly grounded: a celebrity's son returns to his childhood home and encounters an AI-powered hologram of his late mother, forcing him to confront his grief and her digital afterlife. It's the kind of premise that could easily go full Black Mirror, but the filmmakers seem more interested in genuine emotional complexity.
Key details:
The film explores what academics are calling "generative ghosts" - digital preservation and continued interaction with deceased loved ones
Sean Douglas wrote the script after his grandmother's passing, saying the personal meaning only became clear once he finished the first draft
Michael Keaton emphasized that they wanted to examine "how does that apply to human emotion? Humans are still more complex," rather than treating AI as science fiction
This kicks off Google's partnership with Range Media Partners, with a second commissioned project called "Lucid" already in development
The film will hit the festival circuit next, though no specific Google AI models are confirmed (likely candidates include Veo 3 for video generation)
Alibaba's Qwen Tackles Multi-Image AI Editing

Alibaba just dropped Qwen-Image-Edit-2509—and yes, they really went with that mouthful of a name—promising "pixel-perfect control" for creators who want more than single-image AI editors can deliver.
The big story here isn't the features (though multi-image drag-and-drop compositing is genuinely useful). It's that this positions itself as a free, open-source alternative to powerful paid models like Nano Banana and Seedream 4.0.
What's new:
Multi-image compositing: Drag in separate assets—person, product, background—and it blends them in a single canvas
Granular control: Claims "pixel-perfect" editing with detailed mask adjustment and selective region work
Creator-focused UX: Built for "tinkerers and designers" rather than one-click automation
Open source access: Part of Alibaba's Qwen model family, so no licensing headaches (see the loading sketch below)
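For the tinkerers, here's a minimal loading sketch using diffusers' generic loader. The model ID follows Alibaba's naming, but the pipeline call signature (the image-list and prompt kwargs) is an assumption based on how similar open image-edit models are typically packaged:

```python
# Hedged sketch: loading Qwen-Image-Edit-2509 via diffusers' generic loader,
# which dispatches to the right pipeline class from the repo's config.
import torch
from diffusers import DiffusionPipeline
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509",  # assumed Hugging Face model ID
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Multi-image compositing: pass the separate assets plus an instruction.
person = Image.open("person.png")
product = Image.open("product.png")
result = pipe(
    image=[person, product],  # kwarg name and list form are assumptions
    prompt="Place the person holding the product in a sunlit studio",
).images[0]
result.save("composite.png")
```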

Lucy Edit: 'Nano Banana' Video Model Goes Open

Decart announced on X they've open-sourced Lucy Edit, their first model on what they're calling the "path to Nano Banana for video." They've released both the model weights and a technical report, positioning this as the initial building block for lightweight AI video workflows.
Core capability - The model handles object replacement, clothing changes, background swaps, and style transfers using only text prompts. No masking or manual annotation required, which removes a major workflow friction point that's plagued previous tools.
Technical approach - Uses rectified flow with channel concatenation, essentially feeding the original video alongside the noisy sample during denoising (see the sketch after these notes). This keeps computational overhead minimal while maintaining precise spatial alignment between input and output frames.
Two-tier release - Lucy Edit Dev is the open-source version available now for researchers and developers to tinker with. Lucy Edit Pro offers higher capability through an API for production use, though pricing and availability details aren't public yet.
Identity preservation - The whitepaper emphasizes maintaining subject identity across transformations, addressing one of the biggest problems with current AI video editing where faces and key features often drift or change unintentionally.
Realistic integration - New elements reportedly respect scene lighting, perspective, and motion dynamics rather than looking pasted on. Added clothing deforms with body movement, inserted objects follow proper physics.
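To make the channel-concatenation idea concrete, here's an illustrative PyTorch sketch. This is not Decart's actual code; the toy network, shapes, and names are all hypothetical. The point is that the denoiser always sees the clean source latents next to the noisy sample, so spatial structure can be copied rather than regenerated:

```python
# Illustrative sketch (not Decart's code): conditioning via channel concatenation.
# All shapes and module sizes here are hypothetical toy values.
import torch
import torch.nn as nn

class ConcatConditionedDenoiser(nn.Module):
    def __init__(self, latent_channels: int = 16, hidden: int = 64):
        super().__init__()
        # Input channels double: noisy latents + source-video latents.
        self.net = nn.Sequential(
            nn.Conv3d(latent_channels * 2, hidden, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, latent_channels, kernel_size=3, padding=1),
        )

    def forward(self, noisy: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # (batch, channels, frames, height, width) -> concat on the channel axis.
        x = torch.cat([noisy, source], dim=1)
        return self.net(x)  # rectified-flow velocity prediction

# One denoising step under a rectified-flow schedule: the sample moves along a
# near-straight path toward the data, so each update is a simple Euler step.
model = ConcatConditionedDenoiser()
noisy = torch.randn(1, 16, 8, 32, 32)   # toy noisy latent video
source = torch.randn(1, 16, 8, 32, 32)  # latents of the original video
dt = 1.0 / 50                           # 50 sampling steps
with torch.no_grad():
    velocity = model(noisy, source)
    noisy = noisy + dt * velocity
```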

Marques Brownlee gives the Meta Ray-Ban Display glasses a hands-on test.

Stories, projects, and links that caught our attention from around the web:
👔 Christopher Nolan has been elected president of the Directors Guild of America, marking his first time leading the union.
🎬 Animeta launches its AI Film Studio to scale up content production and expand its reach in media.
🤖 AI-powered series Whispers debuts as an interactive experience at Busan’s ACFM, letting audiences shape the storyline.
🖼️ Meta will pay $140 million to license Black Forest Labs’ AI for advanced image generation features.
🤝 OpenAI partners with NVIDIA to roll out 10GW of GPU systems, accelerating global AI computing capacity.

👔 Open Job Posts
Virtual Production (VP) Supervisor/Specialist - FT
Public Strategies
Oklahoma City, OK
Architect (Rhino/Grasshopper/Revit/Blender)
Runway
Remote
VFX Artist (Runway/Flame/Nuke)
Runway
UK
Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.


Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.