In this week’s episode of Denoised, Joey breaks down the most relevant announcements from Adobe MAX 2025 and explains what they mean for filmmakers, editors, and VFX artists. Highlights include Firefly 5, Premiere’s AI object mask, new Firefly tools for video and audio, practical Photoshop features that speed up everyday work, experiments in 3D and relighting, and Frame.io updates that aim to make review and collaboration smarter.
1. Firefly 5: better portraits and more reliable under the hood
Joey opens with the headline everyone expected: Adobe updated its Firefly model to version 5. The hosts note the distinction between Firefly the product ecosystem and Firefly the commercially safe model. Firefly 5 focuses on improving people and portrait generation, which directly benefits features already using Firefly under the hood—features like Photoshop’s Harmonize.
Why this matters for filmmakers: Harmonize and similar features get more reliable when the foundation model handles human subjects better. That reduces time spent fixing artifacts, skin tone mismatches, and lighting problems during comps and plate blending.
Key takeaway
Expect higher quality when Firefly is used for people-related tasks like harmonizing a subject into a new scene.
Commercially safe training remains a differentiator for studios needing clear licensing.
2. Model integrations and Topaz generative upscaling
Photoshop now exposes additional generative models like Nano Banana and FLUX Kontext (previously beta-only), and Adobe announced a partnership with Topaz for generative upscaling built into Photoshop, Firefly, and Firefly Boards.
For filmmakers working with mixed-resolution assets, Topaz’s AI-powered upscaler provides a sharper, more plausible detail lift than classical interpolation. The hosts point to a practical workflow: generate at lower resolution for speed and cost, then upscale selectively—useful when budgets or render times are constrained.
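To make the pattern concrete, here is a minimal Python sketch of that iterate-cheap, upscale-late loop. The generate_preview and generative_upscale functions are hypothetical placeholders (Topaz’s integration lives inside Photoshop and Firefly, not a public Python API); only the bicubic baseline is a real Pillow call.

```python
# Sketch of the iterate-cheap, upscale-late pattern. generate_preview() and
# generative_upscale() are hypothetical placeholders; only the bicubic
# baseline is a real Pillow call.
from PIL import Image

def generate_preview(prompt: str, size=(1024, 576)) -> Image.Image:
    # Placeholder for a fast, low-resolution generation pass.
    return Image.new("RGB", size, "gray")

def bicubic_upscale(img: Image.Image, factor: int = 2) -> Image.Image:
    # Classical interpolation: cheap, but it invents no new detail.
    w, h = img.size
    return img.resize((w * factor, h * factor), Image.BICUBIC)

def generative_upscale(img: Image.Image, factor: int = 2) -> Image.Image:
    # Placeholder for a Topaz-style generative upscaler that adds plausible
    # detail rather than interpolating existing pixels.
    raise NotImplementedError("swap in whatever upscaler your pipeline uses")

draft = generate_preview("night exterior, rain, neon signage")  # iterate here
final = bicubic_upscale(draft)  # swap for generative_upscale() on delivery
final.save("plate_preview_2x.png")
```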
Key takeaway
Generative upscaling can preserve or add detail where math-based upscalers struggle.
Even as native high-res models evolve, upscalers will remain valuable for crops, retouches, and faster iteration.
3. Premiere public beta: AI object mask
One public beta that immediately grabbed the hosts’ attention is Premiere’s AI object mask: an automatic tool that identifies and isolates people and objects so editors can edit and track them without manual rotoscoping.
For editorial and VFX pipelines this is huge. Manual rotoscoping is one of the most time-consuming, error-prone tasks in finishing. A credible automatic mask reduces billable hours and accelerates editorial iterations—especially for teams that need fast, repeatable isolations for color grading, effects, or plate replacements.
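As a rough illustration of why an automatic matte saves so much time downstream, here is a small Python sketch that restricts a grade to a masked subject. The get_object_mask function is a stand-in for whatever segmentation or tracking produces the matte; nothing here reflects Premiere’s actual implementation.

```python
# Sketch of how a per-frame object matte is used downstream: restrict an
# exposure lift to the masked subject. get_object_mask() is a hypothetical
# stand-in for whatever segmentation/tracking produces the matte.
import numpy as np

def get_object_mask(frame: np.ndarray) -> np.ndarray:
    # Placeholder: a real pipeline would call a segmentation model here.
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0  # dummy subject region
    return mask

def grade_subject(frame: np.ndarray, gain: float = 1.3) -> np.ndarray:
    mask = get_object_mask(frame)[..., None]             # H x W x 1 alpha
    graded = np.clip(frame.astype(np.float32) * gain, 0, 255)
    blended = graded * mask + frame.astype(np.float32) * (1.0 - mask)
    return blended.astype(np.uint8)

frame = np.full((1080, 1920, 3), 90, dtype=np.uint8)     # stand-in plate
out = grade_subject(frame)
```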
Key takeaway
Expect big time savings in rotoscoping and tracking tasks when accuracy is good enough for final delivery.
Test it on real editorial footage—edge cases and motion blur remain the usual tradeoffs.
4. Firefly Custom Models vs Firefly Foundry
Next, the hosts unpack Firefly Foundry and the new Firefly Custom Models waitlist. Joey explains the difference clearly: Foundry feels enterprise-oriented, offering bespoke training on company assets and IP, while Custom Models looks like a self-serve option for creators to upload reference images and lock a visual style to their outputs.
The hosts compared the new custom model feature to Midjourney moodboards or Runway presets: creators upload a set of reference images and get a “style” token to call in future generations. The big advantage for filmmakers is consistency—maintaining a visual style across multiple assets or scenes without re-supplying references every time.
Key takeaway
Custom models speed creative iteration when a project needs consistent, repeatable visuals tied to a brand or personal likeness.
Foundry remains the enterprise path for deeper, bespoke model training at scale.
5. Project Graph: node-based AI editing
Joey highlights a node-based editor called Project Graph—likely connected to Adobe’s Invoke acquisition. This signals Adobe testing node-graph workflows for creative editing and AI chaining. Node-based systems let users build modular chains of operations (transform, denoise, stylize) that can be rerun and tweaked nonlinearly.
For filmmaking teams used to layer-based tools, node-based workflows offer repeatable procedural control and clearer versioning for complex VFX passes. The hosts warn that node UIs will require a learning curve for long-time Adobe users, but the payoff is more transparent, reproducible pipelines.
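For readers who have never touched a node graph, this toy Python sketch shows the core idea: named operations wired into a dependency graph that can be re-evaluated after tweaking any node’s parameters. The node names and operations are made up for illustration; this is not Project Graph’s API.

```python
# Toy node graph: each node is a named operation with upstream inputs.
# Tweak one node's parameters and re-evaluate, and everything downstream
# updates, which is the property that makes node UIs appealing.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Node:
    name: str
    op: Callable[..., Any]
    inputs: List[str] = field(default_factory=list)
    params: Dict[str, Any] = field(default_factory=dict)

class Graph:
    def __init__(self):
        self.nodes: Dict[str, Node] = {}

    def add(self, node: Node):
        self.nodes[node.name] = node

    def evaluate(self, name: str, cache=None):
        cache = {} if cache is None else cache
        if name in cache:
            return cache[name]
        node = self.nodes[name]
        upstream = [self.evaluate(i, cache) for i in node.inputs]
        cache[name] = node.op(*upstream, **node.params)
        return cache[name]

g = Graph()
g.add(Node("load", lambda path: f"frames({path})", params={"path": "plate.exr"}))
g.add(Node("denoise", lambda x, strength: f"denoise({x}, {strength})",
           inputs=["load"], params={"strength": 0.4}))
g.add(Node("stylize", lambda x, look: f"stylize({x}, {look})",
           inputs=["denoise"], params={"look": "neo-noir"}))
print(g.evaluate("stylize"))   # change a param and re-evaluate nonlinearly
```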
Key takeaway
Node-based editing is worth learning for procedural VFX and compositing tasks.
Expect gradual introduction inside Adobe products rather than an immediate workflow swap.
6. Layered image editing and automatic segmentation
One of the most immediately practical demos was layered image editing. A Firefly-backed feature automatically segments an imported image into layers, labels each region, and makes moving or editing parts intuitive. The hosts show a demo where moving chopsticks triggers generative fill to reconstruct the background.
For visual effects and virtual production, automatic layer separation means faster parallax plates, easier foreground edits, and accelerated matte generation for fast comping and set-dressing.
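Conceptually, the demo turns a flat image into something like the layer stack sketched below: each segmented element becomes an RGBA layer with a label and an offset, composited over a reconstructed background, so “move the chopsticks” is a re-composite rather than a repaint. This is an illustrative data model, not Adobe’s.

```python
# Illustrative data model for a segmented, labeled layer stack over a
# reconstructed background. Moving an element only changes its offset.
import numpy as np
from dataclasses import dataclass

@dataclass
class Layer:
    label: str
    rgba: np.ndarray        # H x W x 4, float in [0, 1]
    offset: tuple = (0, 0)  # (dy, dx); edit this, not the pixels

def composite(background: np.ndarray, layers: list) -> np.ndarray:
    out = background.copy()
    for layer in layers:
        shifted = np.roll(layer.rgba, shift=layer.offset, axis=(0, 1))
        alpha = shifted[..., 3:4]
        out = shifted[..., :3] * alpha + out * (1.0 - alpha)
    return out

h, w = 540, 960
background = np.zeros((h, w, 3), dtype=np.float32)            # generative-fill result
chopsticks = Layer("chopsticks", np.zeros((h, w, 4), dtype=np.float32))
chopsticks.offset = (0, 40)                                    # nudge 40 px right
frame = composite(background, [chopsticks])
```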
Key takeaway
Rotoscoping and manual mask creation will be reduced for many everyday tasks.
Generative fill that respects reconstructed backgrounds widens the scope of quick fixes in post.
7. Project Moonlight: automated, on-brand social content
Project Moonlight is Adobe’s experiment in automating social media asset generation. It analyzes existing brand assets and generates new posts that stay on-brand. The hosts flagged the competition—platforms and companies that already control the distribution layer—while recognizing the value for teams that need consistent, on-brand content at scale.
Key takeaway
Brands and content teams can scale social content creation, but platform incumbents may absorb similar features.
Moonlight is useful for rapid publishing cycles where writers and designers need a first pass they can refine.
8. Firefly Video Editor and AI audio tools
Adobe previewed a private beta of a web-based Firefly Video Editor: a multitrack timeline for quick assembly, trimming, sequencing, and adding AI voiceover and soundtracks. Adobe also announced Firefly audio models for music generation and a Generate Speech text-to-speech tool, plus a partner voice model from ElevenLabs.
These features target mobile and social creators who need fast edits and good-sounding voiceovers without a full NLE. Joey compares the offering to tools like Descript and Premiere’s mobile apps: the real question is how much non-destructive control and precision a creator retains after AI edits.
Key takeaway
Good for quick social edits and first-cut assembly, but pros will test non-destructive control for finishing work.
Integration with Premiere (if it happens) will determine whether it becomes part of pro pipelines or remains a fast social tool.
9. Project Light Touch: relighting images
Project Light Touch demonstrated interactive relighting: spherical light sources are dropped into a still image and the scene is relit around them. The hosts likened the demo to tools like Bull and flagged the high demand among VFX artists for easier relighting solutions.
Relighting simplifies look development for composites, virtual production plates, and concept art. If this extends to video, it could reduce set relight requirements or allow post relighting adjustments that today require complex HDRI setups.
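Stripped to its simplest form, the shading step behind a relight is a Lambertian N·L term computed from estimated per-pixel normals, as in the numpy sketch below. How Light Touch actually estimates scene geometry and handles shadows is not public; this only illustrates the math of adding a new point light.

```python
# The shading step of a relight in its simplest form: a Lambertian N.L term
# from per-pixel normals. Geometry estimation is out of scope here.
import numpy as np

def relight(albedo: np.ndarray, normals: np.ndarray,
            light_pos: np.ndarray, intensity: float = 1.0) -> np.ndarray:
    """albedo: H x W x 3 in [0, 1]; normals: H x W x 3 unit vectors."""
    h, w = albedo.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Treat pixels as points on the z = 0 plane and point them at the light.
    pts = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).astype(np.float32)
    to_light = light_pos - pts
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True) + 1e-6
    ndotl = np.clip((normals * to_light).sum(-1, keepdims=True), 0.0, 1.0)
    return np.clip(albedo * ndotl * intensity, 0.0, 1.0)

h, w = 256, 256
albedo = np.full((h, w, 3), 0.7, dtype=np.float32)
normals = np.zeros((h, w, 3), dtype=np.float32)
normals[..., 2] = -1.0                       # flat surface facing the camera
lit = relight(albedo, normals, light_pos=np.array([w * 0.8, h * 0.2, -200.0]))
```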
Key takeaway
Relighting is one of those features that directly reduces production friction—fewer reshoots or complex HDRI setups.
Video-level relighting remains a tougher problem, but still a high-value area for R&D.
10. Project Frame Forward: alter start frame, keep motion
Project Frame Forward enables altering the start frame of a clip while preserving the original motion. The example shows inpainting to remove an object while keeping surrounding motion intact. The hosts caution that earlier tools can warp or degrade quality in processed frames, so output resolution and temporal fidelity are the validation points.
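A simplified way to reason about “edit the first frame, keep the motion” is to warp the edited first frame along optical flow estimated from the original clip, as in the sketch below. It uses OpenCV’s Farneback flow purely as an illustration; it is not how Frame Forward works internally.

```python
# A simplified analogue of "edit frame one, keep the motion": estimate flow
# from each later frame back to the first frame, then warp the edited first
# frame along it. OpenCV's Farneback flow is used only as an illustration.
import cv2
import numpy as np

def propagate_edit(edited_first: np.ndarray, original_frames: list) -> list:
    """original_frames: list of BGR uint8 frames; edited_first matches frame 0."""
    first_gray = cv2.cvtColor(original_frames[0], cv2.COLOR_BGR2GRAY)
    h, w = first_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    out = [edited_first]
    for frame in original_frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Flow from the current frame back to the first frame.
        flow = cv2.calcOpticalFlowFarneback(gray, first_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        out.append(cv2.remap(edited_first, map_x, map_y, cv2.INTER_LINEAR))
    return out
```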
Key takeaway
Use this for small cleanups, set-dressing, and plate repairs if it preserves temporal fidelity and resolution.
Quality drop and warping remain the main concerns to validate in client work.
11. Project Surface Swap and Image to 3D
Surface Swap in Photoshop demonstrated automatic surface detection to replace or recolor complex objects like cars while preserving highlights and shadows. Meanwhile, Image-to-3D and Project Scene It showed Adobe exploring quick 3D conversions and interactive 3D navigation inside Firefly Boards.
The hosts emphasize that Adobe has pieces in 3D—Substance 3D and Mixamo acquisitions exist—but lacks an all-in-one 3D modeling/animation platform like Blender or Maya. The image-to-3D demos are useful for product shots and compositing, but mesh quality and riggability remain constraints for character work.
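The highlight-and-shadow-preserving part of a recolor can be approximated by editing hue and saturation inside the detected surface while leaving the luminance channel alone. The sketch below shows that step only; Surface Swap’s detection and texture replacement go well beyond it.

```python
# Crude approximation of "recolor but keep highlights and shadows": shift
# hue/saturation inside the surface mask, leave the value channel untouched.
import cv2
import numpy as np

def recolor_surface(img_bgr: np.ndarray, mask: np.ndarray, new_hue: int) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = np.where(mask > 0, new_hue, hsv[..., 0])   # replace hue
    hsv[..., 1] = np.where(mask > 0, 200, hsv[..., 1])       # boost saturation
    # hsv[..., 2] (value) is untouched, so shading survives the recolor
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

img = np.full((480, 960, 3), (40, 60, 200), dtype=np.uint8)  # stand-in plate (BGR)
mask = np.zeros(img.shape[:2], dtype=np.uint8)               # stand-in surface mask
mask[120:360, 240:720] = 255
out = recolor_surface(img, mask, new_hue=120)                # OpenCV hue range is 0-179
```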
Key takeaway
Surface Swap is a practical speed tool for rapid look variations and ad or product shots.
Image-to-3D is useful for quick visualization but not yet a replacement for production-grade meshes and rigging.
12. Project Clean Slate: change dialogue by altering transcript
Clean Slate demonstrates changing a speaker's dialogue by editing the transcript. The hosts point out that this is similar to Descript and other tools, and the usual guardrails apply—voice authentication and permissions are critical to avoid misuse.
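The reason transcript-driven editing works at all is that a word-level transcript carries timestamps, so a text edit maps directly to an audio region to cut or regenerate. The sketch below shows that mapping with made-up data; the voice synthesis step is deliberately left out.

```python
# Why transcript editing maps cleanly to audio: word-level timestamps give an
# exact region to cut or regenerate. Data is made up; synthesis is omitted.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

transcript = [Word("we", 0.0, 0.2), Word("shoot", 0.2, 0.6),
              Word("tomorrow", 0.6, 1.2), Word("morning", 1.2, 1.7)]

def region_for_edit(words: list, index: int) -> tuple:
    """Audio span that must be replaced when words[index] changes."""
    return words[index].start, words[index].end

start, end = region_for_edit(transcript, 2)  # e.g. change "tomorrow" to "Friday"
# A real tool would synthesize the new word in the (authorized) speaker's voice
# and crossfade it into [start, end]; that step is deliberately left out here.
```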
Key takeaway
Great for tightening voiceovers or cleaning up interviews; legal and ethical permissions matter for voice replication.
Most production teams will require authentication workflows before using synthetic voice in client work.
13. Frame.io: media intelligence and collaboration gaps
The final practical update is Frame.io’s Media Intelligence: semantic search across footage (dog, hat, etc.) and deeper Premiere integration. The hosts praise Frame.io’s camera-to-cloud and transfer reliability, but call out the still-frustrating gap between Premiere Team Projects and Frame.io proxies, asking why syncing cloud proxies into Premiere team workflows still isn’t seamless.
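Under the hood, semantic clip search generally comes down to embedding similarity: clip descriptions (or frames) and the query land in the same vector space and get ranked by cosine similarity. The sketch below illustrates the mechanic with a placeholder embed function; it is not Frame.io’s API.

```python
# The general mechanic behind semantic clip search: embed clip descriptions
# (or frames) and the query into one vector space, rank by cosine similarity.
# embed() is a placeholder, not Frame.io's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would use a vision/text embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

clips = {
    "A001_C003": "dog running on a beach at golden hour",
    "A002_C014": "man in a red hat, interview setup",
    "A003_C007": "city street, rain, night",
}

def search(query: str, top_k: int = 2) -> list:
    q = embed(query)
    scores = {clip_id: float(embed(desc) @ q) for clip_id, desc in clips.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(search("hat"))
```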
Key takeaway
Frame.io remains the backbone for remote review and camera-to-cloud delivery, but deeper Premiere sync would remove practical friction for edit teams.
Media intelligence will help producers and assistant editors find usable shots faster during tight turnarounds.
Adobe entering a practical AI maturity phase
The hosts close the episode with a clear view: Adobe is moving from flashy demos toward practical, under-the-hood AI that speeds everyday tasks. Firefly’s improvements, new audio tools, automatic segmentation, and relighting experiments point to a maturity phase where generative models are embedded to reduce repetitive work rather than just produce standalone creative outputs.
For filmmakers and creatives, the lesson is pragmatic. Test these tools where they remove manual labor—rotoscoping, segmentation, upscaling, plate fixes, and first-cut assembly. Keep an eye on model consistency, output resolution, and permissions for likeness and voice. And expect Adobe to continue a mix of research demos and productized features; not everything shown will ship immediately, but many pieces are already pushing into daily workflows.
Adobe MAX 2025 shows a clear direction: embedding generative models where they remove repetitive work and accelerate iteration. Filmmakers who experiment early will find the biggest gains in speed and cost management, while finishing teams should validate quality and fidelity before committing to deliverables.