In this episode of Denoised, hosts Addy Ghani and Joey Daoud dive into three significant developments at the intersection of AI and media production: Duolingo's strategic pivot to becoming an "AI-first" company, Runway's latest Gen-4 model capabilities, and the potential of Anthropic's Model Context Protocol for creative applications. Let's explore how these developments are reshaping workflows and what they mean for creative professionals.
Language learning platform Duolingo has announced a significant strategic shift, positioning itself as an "AI-first" company. In an internal email shared with employees, Duolingo leadership outlined their vision for integrating AI throughout their operations.
The company's message was clear about the scope of this transition: "Being AI-first means we will need to rethink much of how we work. Making minor tweaks to systems designed for humans won't get us there. In many cases, we'll need to start from scratch."
Perhaps most notably, the company stated they "will gradually stop using contractors to do work that AI can handle," a point that has generated significant discussion in the tech community. While Duolingo emphasized their commitment to supporting employees through this transition with training and tooling for AI, the announcement raises important questions about the future of work.
The hosts discuss several key aspects of this transition:
Code base integration: Duolingo mentioned that getting AI to understand their code base will take time, which raises questions about entrusting core business infrastructure to AI systems.
Beyond coding: While code generation is an obvious application, the hosts speculate about what other contractor roles might be replaced, from marketing to asset generation.
Industry context: This move comes at a time when many tech companies are seeking efficiencies amid economic pressures and rising labor costs.
For professionals concerned about the implications, Joey suggests proactive strategies: "How do you shield up against this? I feel like it's on you too. You gotta dig into all the AI tools... either harness these tools and become the solopreneur who is able to create a big company, or make yourself a more valuable contractor by being the AI contractor who can come in and harness these tools."
Addy agrees, advising contractors to "be at the forefront of the thing that I'm good at and try to use the best tools to increase my gains and my efficiency."
Interestingly, the hosts note that Duolingo is already well-known for its innovative marketing, particularly its viral TikTok presence centered around the Duolingo Owl character. Joey recounts a recent campaign where "the Duolingo owl died, was hit by a Tesla Cybertruck. And it was a whole week-long story that took over the internet of the owl dying and then was resurrected again and came back."
This kind of creative marketing raises questions about whether AI could generate such culturally resonant campaigns, or if it would more likely serve as a tool for creative teams.
Key takeaways:
Duolingo is positioning AI as central to its future operations
The company plans to reduce reliance on contractors for AI-automatable tasks
For professionals, becoming adept with AI tools may be essential for future-proofing careers
Creative marketing and content may remain areas where human creativity still offers distinct advantages
Runway, known for its advanced video generation capabilities, continues to push the boundaries with its Gen-4 model. During their recent Gen:48 competition (a 48-hour filmmaking challenge), Runway teased upcoming features that suggest significant advances in AI-generated visual content.
The first notable development is the "References" feature, which allows users to provide up to three reference images, typically of a person, an object, and a place, along with text direction to create highly consistent image outputs.
As Joey explains: "The interesting thing with Runway is it is built on their Gen-4 model... and their Gen-4 model is really, really good. I've used it for turning image to video. Really good real-world understanding, physics, all that looks great."
While reference-based generation exists in other tools like Pika, what sets Runway's implementation apart is the exceptional consistency in elements like character appearance across different generated images, something that has been a persistent challenge in AI image generation.
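To make that concrete, here is a rough sketch of what a References-style request might look like as an API call. Runway does offer a developer API, but the endpoint, model identifier, and payload fields below are assumptions for illustration, not documented parameters:

```python
# Hypothetical sketch of a References-style image request.
# The endpoint, model name, and payload fields are assumptions
# for illustration; consult Runway's actual API docs before use.
import requests

API_KEY = "YOUR_RUNWAY_API_KEY"  # placeholder credential

payload = {
    "model": "gen4_image",  # assumed model identifier
    "promptText": "The knight walks through the market square at dawn",
    "referenceImages": [    # up to three references, per the episode
        {"uri": "https://example.com/knight.png", "tag": "person"},
        {"uri": "https://example.com/sword.png", "tag": "object"},
        {"uri": "https://example.com/market.png", "tag": "place"},
    ],
}

resp = requests.post(
    "https://api.dev.runwayml.com/v1/text_to_image",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # would contain the generated image (or a task ID)
```

The essential idea is pairing each reference image with a role tag (person, object, place) so the model knows which element to keep consistent across generations.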
Even more intriguing was a demonstration video shared by Runway CEO Cristóbal Valenzuela showing what appears to be a voice-controlled real-time generation interface. Joey describes the demo: "Video is him narrating in real time, like a video generation and basically Holodeck voice commanding. 'Let's pan down from the sky. Oh, we're in a field. Okay, let's see the field. All right, let's add some city blocks here.' And then city blocks drop from the sky."
This demonstration suggests a future direction for creative AI tools where:
Generation happens in real time rather than after a waiting period
The world can be modified dynamically through voice commands
Elements can be added, removed, or modified in an existing scene rather than regenerating from scratch
Addy compares this to how directors typically work with VFX artists in a dailies room: "This is exactly what directors do to VFX artists that sit there. It's like, 'bring in the building, move that tree, bring in a bicycle.'"
However, both hosts acknowledge that professional users will likely still need precise control through traditional interfaces. As Addy notes, "Any artist will tell you their best work is done on a keyboard and mouse... an artist would actually want to place it exactly where it needs to be, rotated, scaled."
Key takeaways:
The References feature promises greater consistency in AI-generated imagery
Real-time voice control points to more fluid creative workflows
These advances could streamline early creative exploration and concept development
For professional applications, precise control will remain essential alongside conversational interfaces
The final topic explores Anthropic's Model Context Protocol (MCP), which enables "secure two-way connections between data sources and AI-powered tools." In practical terms, this creates a pathway for AI assistants like Claude to directly control software applications.
Joey highlights recent demonstrations showing Claude controlling Blender (a 3D modeling application) and DaVinci Resolve (a video editing platform). In these demos, users simply have a conversation with Claude, which then executes the appropriate commands in the software.
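To give a sense of the plumbing behind those demos, here is a minimal sketch of an MCP tool server written with Anthropic's Python SDK. The FastMCP helper and stdio transport come from the SDK; the add_cube tool is a hypothetical stand-in for a real Blender bridge:

```python
# Minimal MCP tool server sketch (pip install mcp).
# add_cube is a hypothetical stand-in: inside Blender, the body
# would call bpy.ops instead of returning a string.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("scene-tools")

@mcp.tool()
def add_cube(x: float, y: float, z: float, size: float = 1.0) -> str:
    """Add a cube to the 3D scene at the given location."""
    # A real Blender integration would run something like:
    #   bpy.ops.mesh.primitive_cube_add(size=size, location=(x, y, z))
    return f"Added a cube of size {size} at ({x}, {y}, {z})"

if __name__ == "__main__":
    # stdio transport lets a local client (e.g., Claude Desktop)
    # spawn this server and call its tools over standard input/output
    mcp.run(transport="stdio")
```

Once a client like Claude Desktop is pointed at a server like this, a plain-English request ("add a cube above the ground plane") can be translated into a structured tool call without the user ever touching the application's UI.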
This raises interesting questions about how AI will integrate with professional creative tools:
Will AI assistants like Claude sit on top of existing software (external integration)?
Or will AI capabilities be built directly into professional tools (native integration)?
How will this affect the economics of software licensing and AI access?
Addy frames the issue through a practical example: "Let's take a very simple task... you're making a movie and you have five primary characters, and those characters need to come in, get imported into your scene. The rig needs to be loaded, and all the automated stuff that needs to happen for an artist to start working."
Traditionally, this might require a technical director to write scripts, a process that could take days. With Claude connected to Maya (professional 3D animation software), users could potentially just ask the AI to handle this setup.
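For a rough picture of what that setup script looks like today, here is the kind of snippet a technical director might write against Maya's built-in Python API (maya.cmds); the character names and asset paths are hypothetical:

```python
# Sketch of a TD-style scene setup script; runs only inside Maya,
# which bundles the maya.cmds Python API. Names/paths are hypothetical.
import maya.cmds as cmds

CHARACTERS = ["hero", "villain", "sidekick", "mentor", "rival"]

def load_character_rigs(asset_root="/projects/show/assets/rigs"):
    """Reference each character's published rig into the current scene."""
    for name in CHARACTERS:
        rig_path = f"{asset_root}/{name}/{name}_rig.ma"
        # Referencing (rather than importing) keeps the rig linked to its
        # published source file, the usual pattern in film/TV pipelines.
        cmds.file(rig_path, reference=True, namespace=name)

load_character_rigs()
```

The point of the MCP scenario is that an assistant could generate and run this kind of boilerplate on request, collapsing a days-long scripting turnaround into a conversation.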
This raises questions about who captures the value in such scenarios. As Addy asks, "Is that studio paying Autodesk for Maya? Is that studio paying Anthropic, or is it paying both?"
Joey suggests that while the token costs for such operations might be minimal for individual users, the real opportunity may be in broader integration: "I think the move is for Anthropic to go to Autodesk, sell a bulk license of Claude to every Maya license. So the next version of Maya that you download on your desktop will have built-in Claude in it."
Both hosts agree that professional tools like Maya are unlikely to be displaced, but rather enhanced by AI integration: "I don't see Maya being utilized less by the film and TV community regardless of how much AI is adopted... we're still gonna need a 3D environment with a robust set of tools."
Key takeaways:
MCP enables AI assistants to directly control software through APIs
This opens possibilities for conversational interfaces to professional tools
Success will likely come from seamless integration rather than standalone AI solutions
The most effective implementations will be those where users "don't even realize" AI is working in the background
This episode of Denoised highlights how AI is becoming increasingly integrated into creative workflows, from company-wide strategies at Duolingo to specific creative tools like Runway Gen-4 and API-level integrations through protocols like MCP.
For creative professionals, these developments suggest both opportunities and challenges. While certain tasks may become automated, those who can effectively leverage these new AI capabilities may find themselves more productive and valuable than ever before.
As Addy notes in the closing segments, the companies that succeed in this space will be those that make AI adoption "as frictionless as possible" with "the most robust, easiest-to-use API... the best cloud platform... the safest, most secure thing."
The future appears to be one where AI becomes an invisible but essential part of creative workflows: "Eventually the point where you don't even realize that Claude or whatever's happening in the background, it's just doing it."