Adobe launched a Foundry service that promises custom model training for enterprises, and immediately backed it with a tactical acquisition. In this episode of Denoised, hosts Addy and Joey break down Adobe Foundry and the Invoke acquisition, Runway's move to offer full-weight model training and fine-tuning as a service, and how Amazon's House of David used AI in 253 shots to compress production time and cost.
Adobe AI Foundry and the Firefly context
Adobe positioned Firefly early as a commercially safe generative model by training on licensed assets only. That approach solved a legal problem but created a quality trade-off: licensed datasets such as stock photo libraries are small relative to the web-scale scrapes other models train on, so commercially safe models can underperform on fine-grained, real-world distinctions.
Foundry is Adobe's answer to that trade-off. It is presented as an enterprise service where Adobe will help organizations train a custom model tuned to their brand, product photography, or creative requirements. The service appears targeted at large customers and is likely to carry enterprise pricing. The practical pitch is simple: if Firefly alone was not hitting the fidelity bar brands demand for ads and commerce, Foundry will let brands get closer by training models on their own image libraries and visual assets.
Key notes on Foundry and Firefly:
Firefly was built on licensed data to be commercially safe, but that limited its world knowledge compared to models trained on web-scale data.
Foundry is framed as a custom-model training service for enterprise clients. Expect hands-on work and meaningful cost for that level of customization.
Adobe has begun opening the Firefly ecosystem to third-party image and video models to regain market share in areas like e-commerce and social creative that value turnaround and cost.
Why Adobe acquired Invoke
Invoke is a model-agnostic, node-based workflow tool that runs in the browser and chains multiple models through sequential node graphs. That node-based approach is less daunting than legacy compositing node trees and friendlier to creatives who use Lightroom or Photoshop but want more control than a single prompt box provides.
What the acquisition delivers to Adobe:
Node-based workflows that make model chaining and repeated inference accessible in a web environment (a minimal sketch of the idea follows at the end of this section).
Model-agnostic support, which shortens the path for Adobe to integrate multiple third-party models into Firefly and related web products like Adobe Express and Firefly Boards.
Talent and product IP that can be folded into an automated enterprise Foundry pipeline or offered as a developer-facing tool inside Adobe's ecosystem.
Invoke's current hosted service is being shut down ahead of integration, so existing users will need an interim plan. Local hosting options were part of Invoke's offering, and whether those will persist under Adobe is an open question.
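For a sense of what a node-based, model-agnostic pipeline looks like, here is a minimal Python sketch. It is not Invoke's actual API; the node names and model calls are stand-ins, and the point is simply that each node wraps one inference step while the graph chains them over shared state.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Node:
    name: str
    run: Callable[[dict], dict]  # reads the shared context, returns new keys


@dataclass
class Graph:
    nodes: list[Node] = field(default_factory=list)

    def execute(self, context: dict) -> dict:
        # Run nodes in order; each node extends the shared context, which is
        # what lets different models sit behind one chaining interface.
        for node in self.nodes:
            context |= node.run(context)
        return context


# Hypothetical model calls standing in for any text-to-image or upscaler backend.
def generate(ctx: dict) -> dict:
    return {"image": f"image_for({ctx['prompt']})"}


def upscale(ctx: dict) -> dict:
    return {"image": f"upscaled({ctx['image']})"}


graph = Graph([Node("generate", generate), Node("upscale", upscale)])
print(graph.execute({"prompt": "product shot, studio lighting"}))
```

The structure is what makes swapping one model for another cheap: only a node's run function changes, never the graph itself.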
Runway introduces full-weight training and model fine-tuning
Runway announced full-weight training and fine-tuning services that let customers take an existing foundation model and retrain all of its weights for a new output domain. Instead of building a foundation model from scratch, customers can pivot a strong base model toward very different visual styles or subject matter.
Why this matters:
Full-weight training lets a model trained primarily on cartoons be repurposed to output live-action imagery, given sufficient training footage (a minimal sketch of the technique follows below).
This is still a resource-intensive, enterprise-oriented capability, not a consumer self-service feature. Expect significant compute, data, and cost.
Runway is signaling a strategic pivot: apply the world model it already has to industries that can pay for vertical solutions, such as robotics, architecture, life sciences, and autonomous systems.
The practical implication is clear: companies with domain-specific needs can buy a customized variant of a leading model rather than training from scratch, compressing time to market and widening commercial applications.
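Runway has not published the internals of its service, so the following is only a generic PyTorch sketch of what "full-weight" fine-tuning means in contrast to adapter-style methods: every parameter stays trainable and the whole network shifts toward the new domain. The backbone, learning rate, and loss are illustrative assumptions.

```python
import torch
import torchvision

# Any pretrained backbone works for the illustration; Runway's actual models
# and training stack are not public.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

# Full-weight training: every parameter stays trainable, unlike LoRA or adapter
# fine-tunes that freeze the base model and learn small add-on weights.
for p in model.parameters():
    p.requires_grad = True

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # low LR to avoid
loss_fn = torch.nn.CrossEntropyLoss()                       # wrecking the prior


def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # gradients flow through the entire network,
    optimizer.step()  # so the whole weight set shifts toward the new domain
    return loss.item()
```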
Robots, training data, and marketplaces
Several examples highlight the interplay between video models and robotics. To teach a robot to fold laundry, a company needs a long tail of labeled video clips. That creates demand for specialty data marketplaces and shot-sourcing businesses that offer curated training footage, much like a stock footage library but for AI training.
Expect a growing market for curated video datasets and marketplaces that sell task-specific training clips (a hypothetical manifest entry below sketches what one listing might include).
Model fine-tunes and specialty model variants will themselves become products sold on marketplaces to robot integrators and industrial customers.
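As a concrete illustration of what such a marketplace listing might need to carry, here is a hypothetical manifest entry; the field names and values are invented for this sketch, not any vendor's real schema.

```python
from dataclasses import dataclass


@dataclass
class TrainingClip:
    uri: str                # where the footage lives
    task: str               # the long-tail action being taught
    duration_s: float
    fps: int
    camera: str             # rig and lens metadata matters for embodied learning
    license: str            # rights clearance is part of the product
    annotations: list[str]  # per-segment action labels


clip = TrainingClip(
    uri="s3://clips/laundry/0001.mp4",  # hypothetical path
    task="fold_towel",
    duration_s=14.2,
    fps=30,
    camera="overhead, 24mm equivalent",
    license="commercial, exclusive",
    annotations=["grasp_corner", "fold_half", "smooth_flat"],
)
print(clip)
```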
'House of David' used 253 AI shots: a virtual production case study
Amazon's House of David used generative AI in 253 shots across season 2, with approaches that ranged from fully AI-generated sequences to hybrid LED wall setups and composited enhancements. That scale illustrates the point where AI moves from novelty to production tool in episodic workflows.
How the team approached it:
Some shots were fully AI-generated. Others used a small LED wall to display AI-generated backgrounds, with live actors filmed against those plates.
Many shots required hybrid workflows: a structural blockout built in Unreal or another tool, followed by style transfer and AI-driven photorealism to achieve final pixels.
Post-production and human intervention remain essential. AI accelerated the timeline, but compositing, color grading, and manual cleanup were still required.
Unreal plus AI style transfers: compressing weeks into days
Virtual production teams described a workflow where structural environments are blocked out in Unreal in about a week, then AI style transfers add the photorealism that previously required 10 to 12 weeks and much higher budgets. The combination is pragmatic: keep Unreal for parallax and camera logic, then use generative AI to push textures and lighting to final pixel.
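A minimal sketch of that "structure from Unreal, look from AI" step using the Hugging Face diffusers image-to-image pipeline is below. The show's actual toolchain was not disclosed, so the model id, file names, and parameter values here are placeholder assumptions; the key idea is a low strength setting that preserves the blockout's geometry while re-rendering surfaces and lighting.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder base model; swap in whatever image model the pipeline supports.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Hypothetical frame rendered from the Unreal blockout.
plate = Image.open("unreal_blockout_frame.png").convert("RGB")

frame = pipe(
    prompt="ancient stone courtyard at dusk, photoreal, filmic lighting",
    image=plate,
    strength=0.35,       # low strength keeps the blockout's geometry and parallax
    guidance_scale=7.0,  # while surfaces and lighting move toward final pixel
).images[0]
frame.save("styled_frame.png")
```

A real episodic pipeline would also need temporal consistency across frames, plus the compositing and cleanup passes described above.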
Numbers mentioned on the panel offer perspective:
Traditional environment builds: 10 to 12 weeks and $15,000 to $200,000, depending on scope.
Hybrid approach: structural bones in Unreal in roughly a week, then AI style transfer to add photoreal finishes and shorten delivery time.
For certain genres and period pieces, artists can still hide imperfections. Fantasy, biblical settings, and other worlds that do not exist today are easier targets because audiences accept plausible-looking composites more readily than modern, hyper-familiar environments.
LA Tech Week and Promise: studios reorganizing around AI
LA Tech Week surfaced several studios and startups reorganizing to build AI-first production capabilities. Promise is one such company focused on film and TV workflows. The company hosted a panel where Albert Cheng, head of Amazon AI Studios, discussed studio-level adoption and strategy.
Other notes:
Amazon is creating internal AI-first units to explore AI-enhanced storytelling. The approach emphasizes that good storytelling should not be framed as a separate category purely because AI was used.
Promise also launched a services arm called The Generation Company focused on VFX services that use AI in the pipeline, indicating demand for shops that combine AI expertise with production experience.
What this means for filmmakers, VFX artists, and studio leaders
The headlines point to a few practical realities for production teams evaluating AI:
The quality bar remains high for film and episodic work. Native film formats like 16-bit EXRs, high-dynamic-range color, and compositing precision are still challenging for many off-the-shelf models (a short numeric sketch after this list shows why).
Enterprise services are becoming the norm for production-grade AI. Foundry-style custom training and Runway's full-weight fine-tuning are built for companies with deep asset libraries and budgets.
Hybrid workflows shorten timelines. Use Unreal for camera and parallax, then apply AI style transfers to push to final pixel faster and cheaper than traditional full-render pipelines in many use cases.
Human-in-the-loop remains essential. AI speeds iteration but does not replace compositors, colorists, VFX supervisors, or legal clearance teams.
New marketplaces will form. Expect fine-tuned model variants, task-specific datasets, and model-driven plugins to be sold to studios and integrators.
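To make the bit-depth point concrete, here is a small numpy illustration (my own example, not from the episode) of why models and assets built around 8-bit web imagery struggle with film deliverables: scene-linear values above 1.0 clip in an 8-bit encode, while the half-float values EXR files store keep the highlight ratios.

```python
import numpy as np

# Scene-linear values: an 18% grey card up to a bright practical at 16x white.
scene_linear = np.array([0.18, 1.0, 4.0, 16.0])

# 8-bit SDR encode (transfer curve omitted for brevity): everything above 1.0 clips.
as_8bit = np.round(np.clip(scene_linear, 0.0, 1.0) * 255)

# EXR-style 16-bit half float keeps the ratios between highlights.
as_half = scene_linear.astype(np.float16)

print(as_8bit)  # roughly [ 46. 255. 255. 255.] -- highlight detail is gone
print(as_half)  # roughly [ 0.18  1.  4.  16. ] -- detail above white survives
```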
Practical takeaways and next steps
Experiment in lower-risk projects first: e-commerce product shots, social content, and vertical ads are prime testing grounds before adopting AI at scale for episodic projects.
Map credits to dollars. Subscription credit systems differ across platforms, so track how each provider converts credits into generation cost to compare value effectively (see the quick calculation after this list).
Start curating your asset library. Custom model training requires clean, well-labeled images and reference material, and a disciplined asset pipeline pays off when Foundry-style training becomes an option.
Learn node-based, model-agnostic tools. Node editors that chain models and inference steps will be common in creative workflows; mastering them accelerates adoption.
Consult legal counsel early. Clearance and rights remain a moving target. Studios that pre-clear inputs and focus on output-level risk assessment will move faster.
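A quick way to do the credits-to-dollars mapping mentioned above; the plan numbers here are made up, so substitute each provider's real price, included credits, and credits charged per generation.

```python
# Made-up plan numbers for illustration only.
plans = {
    # name: (monthly_price_usd, credits_included, credits_per_generation)
    "provider_a": (30.00, 3000, 25),
    "provider_b": (95.00, 2250, 12),
}

for name, (price, credits, per_gen) in plans.items():
    gens = credits // per_gen
    print(f"{name}: {gens} generations/month, ${price / gens:.2f} per generation")
```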
Final note
Adobe's Foundry and the Invoke acquisition, Runway's fine-tuning service, and production examples like House of David show a market moving from experimentation to enterprise-grade workflows. The work is not finished: integration, tooling, and legal frameworks will continue to evolve. For filmmakers and creative teams, the next 12 to 18 months will be a period to learn, test, and build pipelines that combine the speed of generative AI with the craft of human-directed storytelling.