First up in this week’s Denoised roundup: Wan 2.5 from Alibaba. In this episode, Addy and Joey walk through the key announcements and tests, then move through a packed list of creative tools and model updates that matter for filmmakers and video creators. The hosts cover Qwen-Image-Edit-2509, Google’s Mixboard and Flow updates, Nano Banana integrations, new Topaz upscalers, Figma + Claude Code, Freepik Spaces, and Suno v5 for AI music.
Wan 2.5 by Alibaba
Joey opens the episode with Wan 2.5, the newest preview release from Alibaba and a near-term competitor to Veo 3-style multimodal models. The hosts stress a key caveat: Wan 2.5 is not yet available as downloadable, open weights. Right now it’s accessible only through Alibaba’s API, website, and partner integrations. Joey frames that as a likely product strategy: ship preview versions to collect usage data and hold the open-weights release for later.
What Wan 2.5 brings to the table is a unified model that accepts text or image inputs and produces video with integrated audio and speech. Joey emphasizes the multimodal angle — training on audio, video, and text together — and what that unlocks for filmmakers. The hosts outline two concrete creative possibilities:
Digital likenesses: combine an actor’s visual and vocal samples to produce a consistent digital performer who can speak and move in generated footage.
World‑building with sonic identity: train a model on a visual style and matching sound palette so a generated scene comes with consistent foley and atmospherics (think signature sound design across a franchise).
Practical notes from the test runs: Joey generated short 5–10 second clips in Comfy using Wan 2.5. Output options included resolutions from 480p up to 1080p, horizontal and vertical framing, and a toggle for including audio. Default outputs carry a watermark that can be disabled. Cost per 10-second 1080p clip during the preview was reported at around one dollar, significantly cheaper than comparable cloud options, though that may change once product tiers mature.
What filmmakers should care about
Wan 2.5 could reduce roundtrips between separate audio and video tools if it lands as an open, fine‑tunable model.
Fine-tuning and LoRA-style controls (Joey’s shorthand for customizable tuning layers) are the differentiator: the ability to combine environment, actor likeness, and other tuned layers matters more than raw render quality. A stand-in sketch of that kind of layering follows this list.
For now, access via API and partner apps favors cloud deployments and studio services rather than local test rigs.
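Since Wan 2.5 has no open weights yet, here is a minimal, hypothetical sketch of the layered control the hosts describe, using Hugging Face diffusers and an SDXL image pipeline purely as a stand-in. The LoRA file paths, adapter names, and weights are placeholders, not real assets or episode details.

```python
# Minimal sketch: composing multiple tuned "layers" (LoRAs) on one pipeline.
# Stand-in only: uses an SDXL image pipeline because Wan 2.5 has no open weights.
# Requires diffusers + peft; the .safetensors paths and adapter names are placeholders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load separately trained layers: one for an environment, one for an actor likeness.
pipe.load_lora_weights("loras/desert_outpost_env.safetensors", adapter_name="environment")
pipe.load_lora_weights("loras/lead_actor_likeness.safetensors", adapter_name="actor")

# Blend the layers with per-adapter weights, then generate a frame.
pipe.set_adapters(["environment", "actor"], adapter_weights=[0.8, 0.7])
image = pipe("the lead actor walking through the desert outpost at dusk").images[0]
image.save("composited_lookdev_frame.png")
```

The design point is the one Joey makes: the value is less in any single render and more in being able to dial each tuned layer up or down per shot.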
Qwen-Image-Edit-2509 (Alibaba)
Immediately after Wan 2.5, the hosts flag an update to Qwen’s image editor: Qwen-Image-Edit-2509. Joey notes the team rewrote large portions of the model, and it now supports text-based editing at a level comparable to current proprietary image-editing models, but is available as an open-source offering.
The Qwen update includes strong text adherence for edits, and Comfy workflows are already available for people who want to integrate it into custom pipelines. Joey highlights an ethics-adjacent example from the Qwen blog in which arbitrary people are composited into wedding photos, a reminder that image editors are moving fast and creative control can be used in unexpected ways.
Why this matters
Open image editors that match proprietary quality reduce friction for previsualization and moodboard work.
Comfy integrations make it straightforward to add pose nodes and other deterministic controls to guide final outputs for production needs.
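For teams wiring an image editor like this into a pipeline, a common pattern is to export a working Comfy graph in API format and drive it programmatically. A rough sketch, assuming ComfyUI is running locally on its default port and a workflow was exported via “Save (API Format)”; the node id holding the edit instruction is a placeholder you would look up in your own export.

```python
# Rough sketch: queue an exported ComfyUI workflow with a new edit instruction.
# Assumes a local ComfyUI server on the default port (8188) and a workflow file
# exported via "Save (API Format)". Node id "6" is a placeholder; use the
# text-encode node id from your own graph.
import json
import urllib.request

with open("qwen_edit_workflow_api.json") as f:
    workflow = json.load(f)

# Swap in the edit instruction on the prompt node (placeholder id).
workflow["6"]["inputs"]["text"] = "replace the neon sign text with 'OPEN ALL NIGHT'"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # returns a prompt_id you can poll for the finished output
```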
Google Mixboard
Joey and Addy describe Google Mixboard as a lightweight ideation board that sits somewhere between Miro and the Nano Banana ecosystem. Mixboard is built to keep image assets and generated variations in one place, letting users highlight multiple images as source material and quickly request new variations from the running model (Nano Banana in Google’s stack).
The hosts call out the convenience factor: no downloads or reuploads, instant generation from selected assets, and simple organization for shot‑level ideation. Joey notes it’s currently a Labs feature with unclear usage limits, but it’s a good way to iterate ideas fast.
Filmmaker takeaway
Mixboard is useful for early concepting, mood tests, and rapid shot exploration — especially when collaborating with art departments and ADs on visual styles.
It’s not a replacement for detailed pipeline tools, but it speeds the “what if” stage and keeps source metadata paired with outputs.
Google Flow: Prompt Expanders & Integrations
Next, the episode covers updates to Google Flow, Google’s web tool for Veo 3 video creation. The headline here is “prompt expanders”: a simple but useful feature that lets users save persistent prompt presets (scene look, location, style) and apply them consistently across scene generations. Joey frames this as an evolution toward project memory inside creative tools.
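The presets themselves live inside Google’s tool, but the underlying idea is easy to mirror in a home-grown pipeline. A minimal sketch, assuming you keep per-project look and location descriptors in a plain dictionary and prepend them to every shot prompt; the wording and shot list are illustrative, not from the episode.

```python
# Minimal sketch of project-level "prompt memory": reusable preset text that is
# prepended to every shot prompt so look and location stay consistent.
# Preset wording and shots below are illustrative placeholders.
PROJECT_PRESET = {
    "look": "anamorphic 2.39:1, warm tungsten practicals, soft film grain",
    "location": "rain-soaked neon alley behind a noodle bar",
}

def expand_prompt(shot_description: str, preset: dict[str, str]) -> str:
    """Combine persistent project descriptors with a per-shot description."""
    preset_text = ", ".join(preset.values())
    return f"{preset_text}. {shot_description}"

for shot in ["medium close-up, courier checks her phone",
             "wide, delivery drone descends through the rain"]:
    print(expand_prompt(shot, PROJECT_PRESET))
```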
Flow also integrates Nano Banana directly for in‑tool frame edits, reducing roundtrips. The hosts mention that Nano Banana is now present inside Photoshop beta as a layerable generation option with harmonize blending — a valuable last‑mile fix for many editors and compositors.
Why prompt expanders matter
Preserving consistent style across shots is a common pain point for AI workflows; built‑in presets simplify that work.
Direct model integrations inside the editing environment cut export/import overhead and keep creative intent intact.
Adobe Firefly Boards and Freepik Spaces
The hosts cover two board‑style products in parallel: Adobe Firefly Boards and Freepik’s upcoming Spaces. Joey says Firefly Boards is Adobe’s best Firefly product to date — a moodboard generator that tracks prompts and model provenance so assets carry metadata when moved between apps. Freepik Spaces promises a node‑style board inside Freepik, with a waitlist in place.
Bottom line: mood‑board tools that track prompt and model lineage help teams keep a traceable asset history — useful for both creative continuity and compliance when model provenance matters.
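To make “traceable asset history” concrete, here is a small, hypothetical sketch of the kind of record a board could attach to each generated asset; the field names are assumptions, not Adobe’s or Freepik’s actual schema.

```python
# Hypothetical sketch of per-asset provenance metadata for a mood board.
# Field names are illustrative, not any vendor's actual schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AssetProvenance:
    asset_id: str
    prompt: str
    model: str                # which model produced the image
    model_version: str
    source_assets: list[str]  # ids of reference images used as inputs
    created_at: str

record = AssetProvenance(
    asset_id="board-042/frame-007",
    prompt="overgrown greenhouse set, dawn light, 35mm",
    model="example-image-model",
    model_version="2509-preview",
    source_assets=["board-042/ref-001", "board-042/ref-003"],
    created_at=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(record), indent=2))  # sidecar JSON that travels with the asset
```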
Production impact
Boards that save prompt history and model versions are helpful for VFX bids, director approvals, and rights management.
Collaboration features (project buckets, asset permissions) remain an area where many tools can improve.
Topaz Labs: Starlight Sharp and NYX XL
The roundup then shifts to Topaz Labs, which released two new video restoration models: Starlight Sharp and NYX XL. Joey explains Starlight Sharp as the newest Starlight family member — a diffusion‑based restorer that “infills” missing detail and sharpens footage. NYX XL focuses on denoising while preserving detail.
Joey shares hands‑on impressions: Starlight works best on close to medium shots where facial detail or subject clarity matters. Wide shots with sparse detail (beach scenes, tiny crowd members) still present problems because there’s simply not enough original information for the model to reconstruct convincingly.
How editors and post teams should think about these tools
Use Starlight for character closeups and interview restoration; expect mixed results on wide, low‑detail frames.
NYX XL is a good option for noisy smartphone or CCTV footage where denoising without losing texture is the priority.
Topaz moving toward cloud deployments reduces the need for workstation GPUs, which widens accessibility for smaller teams.
Figma + Claude Code Integration and MCP
Joey walks through a practical demo linking Figma to Claude Code via the Model Context Protocol (MCP). The idea: designers build interfaces in Figma, point Claude Code at the artboard, and get a generated app that respects the original UI. MCP is presented as a higher‑level communication layer that lets models map abstract intents to app data without strict API schemas.
The hosts discuss pros and cons: MCP is powerful for fuzzy, natural-language tasks but can be slower and less deterministic than direct API calls for bulk, fast operations. Joey shares a troubleshooting anecdote in which he ended up asking Claude Code to generate a Python script that used a standard API, because speed and determinism mattered for an Airtable automation.
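Joey doesn’t share the script itself, but the pattern he describes (skip MCP and hit the standard REST API directly when you need bulk speed) looks roughly like this. The base id, table name, and field names are placeholders, and the token is read from an environment variable.

```python
# Rough sketch of the "just use the API" path for an Airtable automation.
# Base id, table name, and field names are placeholders; set AIRTABLE_TOKEN
# to a personal access token before running.
import os
import requests

BASE_ID = "appXXXXXXXXXXXXXX"   # placeholder base id
TABLE = "Shots"                 # placeholder table name
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
HEADERS = {"Authorization": f"Bearer {os.environ['AIRTABLE_TOKEN']}"}

# Pull every record, following Airtable's pagination offsets.
records, params = [], {}
while True:
    page = requests.get(URL, headers=HEADERS, params=params).json()
    records.extend(page.get("records", []))
    if "offset" not in page:
        break
    params["offset"] = page["offset"]

# Bulk-update a placeholder "Status" field in batches of 10 (the API's per-request limit).
updates = [{"id": r["id"], "fields": {"Status": "Reviewed"}} for r in records]
for i in range(0, len(updates), 10):
    requests.patch(URL, headers=HEADERS, json={"records": updates[i:i + 10]})
```

This is the trade-off the hosts land on: a deterministic script like this wins for bulk operations, while MCP earns its keep on tasks that are hard to pin down as a fixed API call.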
Why this matters to studios and product teams
MCP opens doors for richer assistant-style integrations in tools like Maya, Unreal, or Houdini — the promise is intuitive, prompt-driven automation for complex pipelines.
Teams should pick MCP for abstract tasks and APIs for high‑throughput, well‑defined operations.
Suno v5 and the AI Music Debate
The episode closes with Suno v5 — the controversial music platform releasing a new version. Joey summarizes what little is publicly stated: better clarity and vocal control, improved stems/separation, and cleaner high‑frequency vocals. In practical use, Joey says Suno is useful for quickly prototyping beats and vocal ideas, and can even turn scratch singing into fuller vocals.
The hosts use this segment to surface a larger ethical and creative question: AI can shortcut years of craft in music with one prompt. Joey and Addy debate the implications — whether AI‑generated tracks dilute value, how platforms like Spotify are responding with labeling policies, and the role of live performance as a buffer for human musicians.
What creative teams should note
AI music is moving toward more usable outputs, but provenance and labeling policies will affect distribution and licensing.
For filmmakers, AI music offers rapid temp tracks and concept scores, but care is required for final sync and rights clearance.
Closing thoughts
The week’s roundup shows two clear trends relevant to film production: models and tools are increasingly integrated (multimodal outputs, in‑app model integrations), and product strategies favor staged previews with cloud access before open weights are released. Joey repeatedly highlights the importance of control — whether through fine‑tuning layers, project‑level prompt presets, or composable workflows — as the deciding factor that separates useful studio tools from novelty outputs.
For filmmakers and post teams the immediate takeaway is pragmatic: experiment with these previews and board tools for ideation and temp assets, use restoration/upscaling selectively where it serves story and image fidelity, and plan for mixed deployment strategies (local for offline work, cloud for scale). The AI ecosystem is noisy — new models every week — but the portions that will stick are the ones that let creative teams keep control and reduce friction between idea and screen.


