
Welcome to VP Land! Nano Banana is officially out! We’ve been gathering the best tips and use cases, which we share below. We also tested it ourselves on a podcast screengrab, changing angles, backgrounds, and props. It performed exceptionally well.
Last week’s poll ended in a tie. Some called Netflix’s GenAI rules a step forward, others said they simply formalize the norm. Check out today’s poll below.
In today's edition:
Nano Banana tips
Alibaba’s Wan drops another AI model for character animation
iodyne drops the first episode of their NAB-recorded Workflow Kitchen
Veo 3 Fast goes unlimited

We gathered the best Nano Banana tips and use cases

We've been tracking the mysterious Nano Banana model for a while, but now Google has officially released it into the world. Technically called Gemini 2.5 Flash Image, it delivers some of the most advanced AI-powered image editing we've seen, letting you edit photos with plain-English commands.
Some impressive use cases we've seen around the web:
Sketch / Annotate to Prompt: Turn sketches into images, or mock up an image with notes, and Nano Banana will follow the instructions. Freepik just released a new tool that lets you do this right in the app.
Stick Figure Annotation: Map out poses and actions with stick-figure mark-ups.
Photo Restoration: Repairs and reimagines old, damaged, or low-quality images.
Isometric Model From a Single Photo: Prompt "Make an isometric model of the [object] only."
13 Reference Photos in 1 Shot: Digital artist Travis Davids took 13 props and merged them into a single image, following specific instructions.
Best Practices
Google DeepMind's Philipp Schmid dropped a list of recommendations for using Nano Banana.
Here are a few of our favorites:
Be hyper-specific → Add detailed descriptions for precision (e.g., not just “fantasy armor” but “ornate elven plate armor with silver leaf etchings, high collar, and falcon-wing pauldrons”).
Provide context and intent → State the purpose of the image (e.g., “logo for a high-end, minimalist skincare brand” vs. just “logo”).
Use semantic negative prompts → Frame exclusions positively (e.g., instead of “no cars,” say “a deserted street with no signs of traffic”).
Aspect ratios → With multiple uploads, the output defaults to the aspect ratio of the last image provided.
And lastly, Rob de Winter released an AI script for anyone who wants to use Flux or Nano Banana directly in Photoshop.
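For anyone who would rather script edits than click through an app: the pattern behind "edit photos using plain English" is simply sending an image plus a text instruction in one request. The sketch below is a minimal, hypothetical helper (not from the article, and not an official Google SDK) that assembles the JSON body for a Gemini REST `generateContent` call; the `inlineData` part structure follows the public Gemini API, and the model name `gemini-2.5-flash-image` is an assumption.

```python
import base64
import json

# Hypothetical helper: build the JSON body for a Gemini REST
# `generateContent` request that pairs an inline image with a
# plain-English edit instruction (the Nano Banana editing pattern).
def build_edit_request(image_bytes: bytes, instruction: str,
                       mime_type: str = "image/png") -> dict:
    return {
        "contents": [{
            "parts": [
                # The image travels as base64-encoded inline data.
                {"inlineData": {
                    "mimeType": mime_type,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                # The edit itself is just natural language.
                {"text": instruction},
            ]
        }]
    }

if __name__ == "__main__":
    body = build_edit_request(
        b"<png bytes here>",
        "Replace the background with a dimly lit podcast studio.",
    )
    # POST this JSON (with your API key) to the generateContent
    # endpoint for the image model, e.g. models/gemini-2.5-flash-image.
    print(json.dumps(body, indent=2))
```

The same hyper-specific prompting advice above applies here: the `instruction` string is where you spell out materials, lighting, and intent.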
SPONSOR MESSAGE
Find your next winning ad creative in seconds with AI
Most AI tools promise you thousands of ads at the click of a button. But do you really need more ads—or just better ones?
Kojo helps you cut through the noise. We analyze your paid social data to uncover the ideas with the highest chance of success. Then, our AI predicts which concepts will perform best, so you don’t waste budget testing what won’t work.
Instead of drowning in endless variations, Kojo sends your best idea straight to a real human creator who makes it engaging, authentic, and ready to win on social. The entire process takes less than 20 seconds, giving you certainty before you spend and better performance without the waste.
Why gamble on guesswork or settle for AI spam when you can launch ads proven to work, made by people, and backed by data?

Wan 2.2 Can Now Make Talking Avatars

Alibaba is on a roll with yet another new Wan 2.2 model: talking avatars.
Wan2.2-S2V is an open-source AI model that turns static photos and voice recordings into full-body animated videos.
The model handles full-body animation from portrait shots to complex multi-character scenes, supporting human, cartoon, animal and stylized avatar types.
You can download it free from GitHub, Hugging Face, and ModelScope, where the Wan series had reached 6.9 million downloads by mid-2025. It's also available as a native ComfyUI workflow.
It outputs 480p and 720p video in both vertical and horizontal formats, making it suitable for social media content and professional film work.
The AI processes up to 73 frames of video history using advanced compression techniques, enabling stable long-form animations that previous models couldn't handle.
Training on over 600,000 audio-video segments gives it strong performance across dialogue, singing, and complex emotional expressions with high identity consistency.
Also in the generative audio sphere, Tencent's Hunyuan Lab launched HunyuanVideo-Foley, an open-source AI framework that generates synchronized audio directly from video and text inputs.
Workflow Kitchen Tackles iPhone Movie Production

iodyne launched Workflow Kitchen, a new content series exploring data workflows in media production. The first episode features USC Entertainment Technology Center (ETC) experts discussing how to shoot entire films using iPhones enhanced by AI-powered post-production.
Here are some key insights from the premiere episode:
AI + iPhone workflows are redefining filmmaking — The ETC is experimenting with tethered iPhone rigs and AI-powered pipelines to prove that studio-level filmmaking can be achieved on consumer devices.
Creative + technical fusion — Filmmakers are trained to integrate cutting-edge tech (AI, volumetric capture, multi-camera data flows) into storytelling, requiring scripts and production design to flex around what’s possible without losing creative intent.
Data and workflow discipline are crucial — With AI and hybrid workflows, managing massive amounts of metadata and spatial data demands pre-planned pipelines and clear destinations, ensuring efficiency and avoiding chaos on set.

Mickmumpitz’s latest AI-VFX workflow merges live-action with animated AI worlds while preserving camera motion and characters, all inside ComfyUI with local models.

Stories, projects, and links that caught our attention from around the web:
⚡ Krea announces Realtime Video, video generation with consistent motion, identity, and style—waitlist now open.
🚀 No credits required: Google AI Ultra subscribers can now generate unlimited Veo 3 Fast videos.
📱 Tech brand Nothing was caught publishing stock images falsely labeled as official Phone 3 camera samples.
📽️ Warner Bros. will provide a VistaVision projector at the Vista Theater, offering audiences an exceptionally rare screening in this historic widescreen format.

Addy and Joey break down Netflix’s generative AI rulebook, Cosm’s Matrix experience, and how agents are using AI to help their actors (but they might not be getting the most accurate info).
Read the show notes or watch the full episode.
Watch/Listen & Subscribe

👔 Open Job Posts
🆕 Creative Technologist – Virtual Production & Experiential
The Garage
Brooklyn, NY
🆕 Virtual Production (VP) Supervisor/Specialist - FT
Public Strategies
Oklahoma City, OK
Architect (Rhino/Grasshopper/Revit/Blender)
Runway
Remote
VFX Artist (Runway/Flame/Nuke)
Runway
UK
Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events
September 23 to 24
CFX 2025
Chattanooga, TN
October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA
View the full event calendar and submit your own events here.


Thanks for reading VP Land!
Have a link to share or a story idea? Send it here.
Interested in reaching media industry professionals? Advertise with us.