
Welcome to VP Land! Just announced before we hit publish: Veo 3 has added image-to-video support. Now you can upload an image as your starting frame and include dialogue in the prompt to generate synchronized video and audio (huge for creating videos with consistent characters). Here’s a tutorial on how to set it up in Google Flow.

Last week, we asked if AI relighting tools are on your radar for your workflow. Responses were nearly evenly split—from must-have to meh. Check out today's poll below.

In today's edition:

  • Freepik launches unlimited AI image generation

  • CJ ENM builds internal AI tools, unveils first fully AI-generated animation series

  • Greyscale Labs releases DaVinci Resolve virtual haze plugin

Freepik Removes All AI Image Generation Limits

Freepik announced unlimited AI image generation for Premium+ and Pro subscribers, removing all caps, tokens, and waiting periods across multiple leading AI models. The announcement marks a significant shift: creators can now approach AI-generated content without credit anxiety.

  • Multiple model access means creators can generate unlimited images using Mystic, Google Imagen, Flux, Seedream, Ideogram, Runway References, and OpenAI's GPT models all within a single subscription.

  • The psychological impact eliminates the "generation anxiety" that previously caused creators to overthink each prompt, allowing for true creative experimentation and rapid iteration without cost concerns.

  • Freepik's Pro tier at $250/month targets professionals with additional perks like early access to advanced models such as Veo 3 and full merchandise licensing rights for selling physical goods, while the Premium+ plan comes in at $39/month.

  • The platform's aggregated approach differentiates it from competitors, which typically offer single proprietary models with usage restrictions.

  • Infrastructure cost reductions and strategic partnerships with AI model providers have made this unlimited offering financially viable, suggesting broader industry trends toward removing artificial barriers.

SPONSOR MESSAGE

Stay up-to-date with AI

The Rundown is the most trusted AI newsletter in the world, with 1,000,000+ readers and exclusive interviews with AI leaders like Mark Zuckerberg, Demis Hassabis, Mustafa Suleyman, and more.

Their expert research team spends all day learning what’s new in AI and talking with industry experts, then distills the most important developments into one free email every morning.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

New DaVinci Plugin Creates Virtual Haze

Greyscale Labs launched Nano, a new DaVinci Resolve plugin that creates virtual haze effects using highlight-driven, depth-based processing.

  • The plugin generates atmospheric haze effects digitally within your scenes, eliminating the need for physical haze machines or practical effects during shooting.

  • System requirements include macOS 13.5+ or Windows with an RTX 30-Series or newer GPU, making it accessible to most modern editing setups.

  • The depth-based processing allows the plugin to create realistic volumetric effects that respond to the existing lighting and depth information in your footage (a bare-bones version of the idea is sketched after this list).

  • At $109 for a perpetual license, it positions itself as an affordable alternative to more expensive volumetric effect solutions or rental costs for haze machines.
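
For a sense of what "depth-based" haze means in practice, here is a minimal sketch of the standard exponential fog model in Python with NumPy. It is a generic illustration, not Nano's actual algorithm: it assumes you already have a per-pixel depth map (from a depth pass or a monocular depth estimator) and simply blends a haze color into the frame based on distance. Nano's highlight-driven behavior, where light sources bloom through the haze, is not modeled here.

import numpy as np

def apply_haze(image, depth, density=0.8, haze_color=(0.85, 0.87, 0.9)):
    """Blend a haze color into an image using exponential fog falloff.

    image: float32 array of shape (H, W, 3), values in [0, 1]
    depth: float32 array of shape (H, W), normalized scene depth in [0, 1]
    density: how quickly haze builds up with distance
    haze_color: RGB tint of the atmosphere
    """
    # Transmittance: how much of the original pixel survives at this depth.
    transmittance = np.exp(-density * depth)[..., np.newaxis]

    # Far pixels (low transmittance) drift toward the haze color;
    # near pixels (high transmittance) stay close to the original image.
    haze = np.asarray(haze_color, dtype=np.float32)
    return image * transmittance + haze * (1.0 - transmittance)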

Korean Animation Shop Goes AI

Korean entertainment giant CJ ENM has unveiled its comprehensive AI content strategy with the debut of Cat Biggie, the first fully AI-generated animation series from a major studio. The 30-episode non-verbal series follows a cat caring for a baby chick and launches globally on YouTube in July 2025.

  • CJ ENM's proprietary Cinematic AI and AI Script tools enabled a team of just six specialists to complete the entire series in five months, including content planning and character development.

  • The production timeline represents a massive efficiency gain compared to traditional 3D animation, which typically requires three to four months for a single 5-minute episode.

  • The company plans to expand beyond animation with AI-generated films and dramas scheduled for release later in 2025, positioning itself as a global AI studio.

  • The series uses non-verbal storytelling to bypass linguistic barriers, making it universally accessible while testing direct-to-consumer distribution strategies outside traditional streaming platforms.

Midjourney + Luma AI Modify Video Tutorial

Creator Jon Finger shared a step-by-step workflow that combines Midjourney's image generation with Luma AI's Modify Video feature to restyle live-action footage.

  • Shoot your base video first, then capture a screenshot of the opening frame (or export it with the snippet after this list) to use as your editing foundation for the entire transformation process.

  • Upload the first frame to Midjourney and apply heavy personalization settings (--exp 1000) to create a highly stylized version that will guide the video's new aesthetic direction.

  • Process your original video through Luma AI's Modify Video feature by uploading both your source video and the Midjourney-edited first frame together.

  • Adjust the strength parameter in Luma AI to control how the output differs from your original video, with higher settings creating more radical visual transformations.
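
If you'd rather not scrub and screenshot the opening frame by hand, a few lines of Python with OpenCV can export it straight from the source clip. This is just a convenience for step one of the workflow above, not part of Jon Finger's tutorial; the file names are placeholders.

import cv2

video_path = "base_clip.mp4"  # the base video you shot (placeholder name)

capture = cv2.VideoCapture(video_path)
ok, first_frame = capture.read()  # reads frame 0 of the clip
capture.release()

if not ok:
    raise RuntimeError(f"Could not read a frame from {video_path}")

# Save the opening frame as a still to upload to Midjourney.
cv2.imwrite("first_frame.png", first_frame)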

Watch a single-shot drone flight that captures the full ascent of Everest, filmed with the DJI Mavic 4 Pro.

Stories, projects, and links that caught our attention from around the web:

🎬 Manus AI has launched Manus video generation, a new tool that transforms user prompts into fully structured, sequenced, and animated stories ready for viewing.

👾 A24 is partnering with teenage filmmaker Kane Parsons to produce a feature adaptation of his YouTube horror series titled The Backrooms.

🎤 CineD has a behind-the-scenes look at the making of the new Apple Vision Pro film featuring U2 frontman Bono.

🤖 Anthropic has introduced Claude Artifacts, enabling anyone to create AI-powered applications directly in Claude with simple text prompts.

🗣️ Adobe has introduced Firefly Voice, an AI-powered tool that turns text prompts (and your voice) into realistic sound effects for use in film and media production.

Addy and Joey break down 11 cutting-edge AI tools reshaping filmmaking—from Claude’s AI-powered app builder to Adobe’s voice-driven sound effects generator—and explain which ones actually matter for creators navigating today’s fast-moving content landscape.

Read the show notes or watch the full episode.

Watch/Listen & Subscribe

👔 Open Job Posts

Virtual Production Instructional Specialist
College of Motion Picture Arts - Florida State University
Tallahassee, FL

AR/VR Intern
LumeXR
Kerala, India

Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events

July 8 to 11
AI for Good Film Festival 2025
Geneva, Switzerland

September 23 to 24
CFX 2025
Chattanooga, TN

October 3 to 4
Cine Gear Atlanta Expo 2025
Atlanta, GA

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
