
Welcome to VP Land! And welcome back to the internet after AWS decided to CTRL+ALT+DEL. Big week for open source releases and workflow tools that bring AI directly into creative pipelines.

In our last poll, we asked what you thought of House of David's hybrid AI workflow. Results were split — some saw it as a solid case study with show-specific results, while others felt there’s still plenty to figure out. Check out today’s poll below.

In today's edition:

  • Channel 4's AI anchor fools viewers

  • KREA open-sources 14B video model

  • After Effects plugin integrates AI models

  • Testing Wan 2.2's animation control

Channel 4 Fooled Viewers With AI News Anchor

Britain's Channel 4 aired a documentary on Monday with an AI-generated news anchor that viewers didn't realize was fake until the final reveal—a stunt designed to demonstrate how easily audiences can be deceived by synthetic media.

  • The reveal - At the end of "Will AI Take My Job?", the host disclosed on-camera: "I'm an AI presenter. Some of you might have guessed: I don't exist, I wasn't on location reporting this story. My image and voice were generated using AI."

  • Production details - The AI anchor was created by AI fashion brand Seraphinne Vallora for Kalel Productions, guided by prompts to generate a realistic on-camera performance throughout the hour-long special

  • Industry survey data - The documentary explored findings showing 76% of U.K. business leaders have already adopted AI for tasks previously done by humans, with 41% reporting reduced recruitment and nearly half expecting further staff cuts within five years

  • Network position - Channel 4's head of news, Louisa Compton, said they won't make AI presenters "a habit," emphasizing their focus remains on "premium, fact-checked, duly impartial and trusted journalism—something AI is not capable of doing."

  • Compliance note - The stunt followed Channel 4's editorial guidelines for ethical AI use, with the end reveal designed to make viewers reflect on trust and authenticity issues

The timing follows the Tilly Norwood controversy—an AI-generated "actress" that sparked backlash from SAG-AFTRA, which called it a "character generated by a computer program that was trained on the work of countless professional performers—without permission or compensation."

SPONSOR MESSAGE

The Tech newsletter for Engineers who want to stay ahead

Tech moves fast. Still playing catch-up?

That's exactly why 100K+ engineers working at Google, Meta, and Apple read The Code twice a week.

Here's what you get:

  • Curated tech news that shapes your career - Filtered from thousands of sources so you know what's coming 6 months early.

  • Practical resources you can use immediately - Real tutorials and tools that solve actual engineering problems.

  • Research papers and insights decoded - We break down complex tech so you understand what matters.

All delivered twice a week in just 2 short emails.

KREA Open Sources Realtime Video Model

KREA AI open-sourced Krea Realtime, a 14-billion-parameter autoregressive model that generates long-form video at 11 frames per second on a single NVIDIA B200 GPU—making it 10x larger than any comparable open-source video generation model.

  • Model size - At 14B parameters, this is significantly larger than most open-source video models, though the company didn't specify which models it's comparing against

  • Performance specs - Generates long-form video at 11 fps on a single B200 GPU, positioning it for real-time or near-real-time generation workflows

  • Availability - Model weights and technical report are now publicly available for developers and researchers to download and implement

  • Autoregressive approach - Uses an autoregressive architecture for video generation, which predicts each frame based on previous frames rather than generating all frames simultaneously
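
In rough terms, that loop looks like the sketch below (a minimal illustration with a hypothetical predict_next_frame method, not KREA's actual API):

```python
# Illustrative sketch of an autoregressive video loop. predict_next_frame
# is a hypothetical method, not KREA's actual API. Each frame is
# conditioned on a sliding window of previous frames, which is what lets
# output be streamed in real time instead of rendered as one fixed batch.
def generate_video(model, prompt, num_frames, context_window=8):
    frames = []
    for _ in range(num_frames):
        context = frames[-context_window:]        # recent frames only
        frame = model.predict_next_frame(prompt, context)
        frames.append(frame)
        yield frame                               # stream as produced
```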

This release comes as the open-source AI community continues pushing for transparency in model development, though KREA hasn't named the competing models it's measuring against, which makes it difficult to assess where Realtime actually sits in the landscape. The B200 requirement suggests this isn't aimed at consumer hardware—those GPUs run around $30,000-40,000 each—but it could be accessible for studios and research labs already invested in high-end infrastructure.

After Effects Plugin Integrates Nano Banana & More

Eric Day, a founding partner at Asteria Film Co, built an After Effects plugin that integrates five AI image generation models directly into the timeline—eliminating the constant workflow friction of exporting frames, uploading to web tools, downloading results, and re-importing.

Key Details:

  • Five AI models included - Access Flux Kontext Max, Seedream 4.0, Nano Banana, Qwen Edit Plus, and Flux Fill Pro through Replicate's API without leaving After Effects

  • One-click frame extraction - Pull any frame from your timeline without manually exporting it

  • Natural language edits - Describe transformations in plain text and the AI applies them to your keyframes

  • Auto-import system - Modified frames drop back into your composition automatically, respecting work areas and comp settings

  • Inpainting with masks - Create custom masks for targeted AI edits within specific areas of frames

  • Reference image consistency - Maintain visual consistency across multiple frames using a reference image system

Available now through Gumroad.
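
For a sense of the round trip the plugin eliminates, here's a minimal sketch of a single frame edit through Replicate's Python client (the model slug, input fields, and filenames are illustrative assumptions, not the plugin's actual code):

```python
# Illustrative sketch of one manual round trip via Replicate's Python
# client. The model slug, input fields, and filenames are assumptions
# for illustration, not the plugin's actual calls. Requires the
# REPLICATE_API_TOKEN environment variable.
import replicate

output = replicate.run(
    "black-forest-labs/flux-fill-pro",           # one of the five models
    input={
        "image": open("frame_0042.png", "rb"),   # frame exported from AE
        "prompt": "extend the set dressing into the masked area",
    },
)

# Recent client versions return a file-like object; save it to disk, then
# re-import it into the comp by hand -- exactly the steps the plugin skips.
with open("frame_0042_edited.png", "wb") as f:
    f.write(output.read())
```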

Filmmaker Stress Tests Wan 2.2 Animate

Filmmaker Albert Bozesan put Wan 2.2 Animate through its paces, testing its claim to transfer body and facial movements from real footage onto AI-generated images—specifically rendering his office performance in a 1940s film poster illustration style.

What Works:

  • Body control - Outperforms paid competitors like Runway Act Two for complex movements and prop handling (Bozesan's test included towel manipulation and detailed gestures that translated cleanly)

  • Closeup and medium shots - Handles detailed actions well at these distances

  • Single-camera workflow - No specialized equipment required, just standard video reference

What Doesn't:

  • Facial consistency - Characters' faces warp dramatically compared to reference images; the test character Clark's hair color shifts between shots

  • Wide shots - Break down completely due to resolution limitations

  • Requires tool mixing - Bozesan had to patch in Kling 2.5 i2v for insert shots and InfiniteTalk for lip sync when Wan melted facial features

Bottom Line: Wan 2.2 Animate delivers on body motion transfer in ways existing tools don't—particularly for directors wanting precise gesture control in stylized animation. But it's a multi-tool workflow, not a one-stop solution. You'll need to composite around its facial and wide-shot limitations, which makes it more "powerful ingredient" than complete pipeline.

Watch as NetworkChuck tests NVIDIA's palm-sized DGX Spark AI supercomputer against his dual RTX 4090 server to see if the compact Grace Blackwell Superchip can compete with traditional high-end GPUs for running large AI models.

Stories, projects, and links that caught our attention from around the web:

🍌 Nano Banana Tip: Creative Director Henry Daubrez discovers "SHOW ME" is Nano Banana's magic prompt phrase for consistent results with camera angles and character variations.

🎬 Parrot Analytics data shows movies jumped from 27% to 50% of streaming revenue between 2022 and 2024, driven by catalog films that keep subscribers from churning.

🤖 Microsoft debuts MAI-Image-1, its first in-house AI image generator, reducing dependence on OpenAI for photorealistic text-to-image creation.

🎭 Zoe Saldaña wants a Cameron documentary proving motion capture gives actors "100 percent ownership" of performances—a claim that could shape SAG's fight over AI synthetic performance rights.

🎙️ Netflix and Spotify cut a deal to bring video podcasts, including Bill Simmons' The Ringer shows, to streaming subscribers starting early 2026.

Addy and Joey cover Google's Veo 3.1 updates, Runway's new VFX apps, Netflix Eyeline's video reasoning research, and NVIDIA's DGX Spark supercomputer.

Read the show notes or watch the full episode.

Watch/Listen & Subscribe

👔 Open Job Posts

Virtual Production (VP) Supervisor/Specialist - FT
Public Strategies
Oklahoma City, OK

Virtual Production Intern
Orbital Studios
Los Angeles, CA

📆 Upcoming Events

October 24
Generative Media Conference
San Francisco, CA

April 18-22, 2026
NAB Show
Las Vegas, NV

July 15-18, 2026
AWE USA 2026
Long Beach, CA

View the full event calendar and submit your own events here.

Thanks for reading VP Land!

Have a link to share or a story idea? Send it here.

Interested in reaching media industry professionals? Advertise with us.
