The latest episode of Denoised delivers three significant updates from the world of AI and creative technology: Google's substantial updates to their video generation capabilities and interface, Viggle AI's impressive real-time character transformation technology, and former Apple design chief Jony Ive's move to OpenAI. Let's explore what these developments mean for filmmakers and content creators navigating the rapidly evolving landscape of AI-powered production tools.
Google made several major announcements at their recent I/O conference, with two developments standing out in particular for creative professionals: Veo 3, their latest video generation model, and Flow, a new platform that integrates Google's creative AI tools.
Veo 3 represents a significant step forward in text-to-video generation, producing notably photorealistic results that hosts Joey and Addy describe as some of the most impressive in the current market. Unlike previous iterations, Veo 3 can generate both video and matching audio (including speech and sound effects) in a single generation process.
The new Flow platform serves as a centralized interface for accessing Google's AI tools, including both Veo 2 and Veo 3. Flow provides:
A sleek, intuitive interface for managing generated content
Basic video editing capabilities for extending, trimming, and reordering clips
Direct access to Google's AI models without navigating through multiple services
However, premium access comes at a substantial cost. While Google AI Pro ($20/month) provides access to some tools including Veo 2, accessing Veo 3 requires the Google AI Ultra subscription at $250/month. That tier bundles 30 terabytes of storage and YouTube Premium, but it still represents a significant investment compared to competing services.
The per-generation cost structure is also noteworthy:
Veo 3 costs approximately 50 cents per second of generated video
With audio, that increases to 75 cents per second
For comparison, Runway Gen-4 runs at about 12-16 cents per second
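To put those numbers in context, here is a minimal back-of-the-envelope sketch in Python. The per-second rates are the approximate figures quoted in the episode; the clip length is just an illustrative example.

```python
# Rough cost comparison for a single generated clip, using the
# approximate per-second rates quoted in the episode.
RATES_PER_SECOND = {
    "Veo 3 (video only)": 0.50,
    "Veo 3 (video + audio)": 0.75,
    "Runway Gen-4 (low end)": 0.12,
    "Runway Gen-4 (high end)": 0.16,
}

clip_seconds = 8  # example clip length; typical generations are short

for model, rate in RATES_PER_SECOND.items():
    print(f"{model}: ${rate * clip_seconds:.2f} for {clip_seconds}s")
```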
Despite the price tag, Flow offers practical applications for content creators. The hosts highlighted its potential value for YouTube creators needing high-quality B-roll or specific visual elements without extensive production resources.
"For $250 a month it's totally worth it," notes Addy, pointing to how it compares favorably to hiring dedicated motion graphics professionals for similar work.
One of Flow's most useful features allows users to trim a generated clip and re-prompt the system to continue the video in a new direction, maintaining consistency with the retained portion. This provides more control over the sometimes unpredictable nature of AI-generated video.
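Conceptually, that trim-and-extend loop looks something like the sketch below. This is not Flow's actual API; generate_clip and extend_clip are hypothetical placeholders used only to illustrate the pattern of keeping an approved portion of a clip and re-prompting the continuation.

```python
# Hypothetical sketch of the trim-and-extend pattern described above.
# generate_clip() and extend_clip() are placeholders, not a real Google API.

def generate_clip(prompt: str) -> str:
    """Stand-in that pretends to generate a clip and returns a file name."""
    return f"clip_{hash(prompt) & 0xFFFF}.mp4"

def extend_clip(source: str, keep_seconds: float, new_prompt: str) -> str:
    """Stand-in that pretends to keep the first keep_seconds of source
    and continue the shot according to new_prompt."""
    return f"extended_{keep_seconds:g}s_{hash(new_prompt) & 0xFFFF}.mp4"

clip = generate_clip("a drone shot over a foggy coastline at dawn")
# Keep the first 4 seconds that worked, then steer the rest in a new direction.
clip = extend_clip(clip, keep_seconds=4.0,
                   new_prompt="the camera tilts up to reveal a lighthouse")
print(clip)
```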
The hosts also noted Google's SynthID, its approach to embedding invisible identification watermarks within AI-generated media. This serves as a verification tool, potentially helping address concerns about distinguishing between real and AI-generated content in an era of increasingly convincing synthetic media.
While Google's announcements captured most of the week's attention, the hosts were particularly excited about Viggle AI, a new tool enabling real-time character transformation using only a standard webcam or video feed.
Unlike traditional methods requiring specialized hardware, Viggle AI transforms a person's appearance into different characters instantaneously without motion capture suits, facial markers, or other technical equipment.
"Real-time AI is going to be such a huge frontier," Addy emphasizes. "I'm more excited about that than Veo 3."
This development represents a significant shift from the complex and cumbersome workflows that previously made digital character creation inaccessible to most content creators. The hosts compared Viggle's approach to the elaborate setup required by successful virtual streamers like CodeMiko, who uses professional-grade motion capture suits that require significant technical knowledge to operate.
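For a sense of what "real-time" means in practice, the sketch below shows the shape of a webcam-in, stylized-frame-out loop using OpenCV. The transform_frame function is a placeholder for whatever character model a tool like Viggle runs under the hood; this is a generic illustration of the pipeline, not Viggle's implementation.

```python
import cv2  # pip install opencv-python

def transform_frame(frame):
    """Placeholder for a character-transformation model.
    Here we just posterize the frame so the loop runs without any AI model."""
    return (frame // 64) * 64

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stylized = transform_frame(frame)     # per-frame transformation
    cv2.imshow("character preview", stylized)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```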
Some key implications for creators:
Streamlined Creator Workflows: What once required extensive technical setup can now be accomplished with a standard camera
Expanded Access: The barrier to entry for creating character-based content has been dramatically lowered
Production Flexibility: Creators no longer need to manage specialized equipment or deal with the limitations of wearing tracking suits for extended periods
The hosts also highlighted potential applications beyond streaming, including ideation and concept development for animated films, where rapid visualization of character performances could speed up the creative process.
This development aligns with what Joey described as an important shift in the AI landscape: "This is the exact flip of what I've just been complaining about, where it's just like it takes too long... This is the exact opposite where I think you can see it and it's happening in real time. That's the future."
In a move that signals OpenAI's potential expansion beyond software, former Apple Chief Design Officer Jony Ive has joined forces with OpenAI through a reported $6.5 billion deal, with his design firm, LoveFrom, taking on design responsibilities for the company.
Ive, known for his influential work designing iconic Apple products including the iPhone, iPad, and various MacBook generations, brings world-class hardware and interface design expertise to the AI company. The hosts speculated this likely indicates OpenAI's intention to develop physical products that embody their AI technology.
"I highly doubt this is about the money," Joey notes. "This is about the next challenge, the next thing to build for whatever an AI future device looks like."
The hosts discussed several possibilities for what an OpenAI hardware product might entail:
AR Glasses: Following the industry trend toward wearable displays, potentially with built-in cameras for visual AI processing
Audio Interfaces: Specialized earphones or bone-conduction audio devices that provide continuous AI assistance
Phone-Like Devices: Possibly focused less on the hardware itself and more on reimagining the operating system and application layer for AI-centric interactions
Addy suggested Ive's greatest contribution may come through interface design rather than hardware innovation: "I think the magic that Jony Ive will bring will be in the operating system and the application layer, not necessarily the hardware itself, because if you use ChatGPT every day, it's still kind of clunky."
The hosts agreed that current AI interfaces leave much room for improvement, from chat history management to natural interaction models, precisely the kind of design challenges Ive built his reputation addressing at Apple.
Throughout the episode, Joey and Addy reflected on how these developments fit into broader trends in AI-powered content creation. Joey observed that while model quality continues to improve rapidly, the fundamental workflows remain similar: "The models are better. Stuff looks more realistic, but the workflows themselves of text-to-image, image-to-video is not evolving as much as the quality of the models."
This observation came from Joey's recent experience filming season two of Cinema Synthetica, a 48-hour AI filmmaking competition. Despite having access to significantly better models than last year's competition, teams still faced similar workflow challenges, particularly around maintaining creative control when moving from image generation to video generation.
Addy emphasized the importance of creators adapting to this changing landscape: "We're the transitional generation... Movie making 30 years from now, who knows what that'll look like. But for us, we come from the world of computer-generated graphics and VFX and all the tools to now transitioning to these things, and then we'll pass the torch to the future generation."
The hosts agreed that learning the fundamental workflows of AI-assisted creation provides future-proofing value, as these processes remain relatively consistent even as the underlying models improve: "Once you kind of figure out a workflow that works for you, once there's a new model or something, it's like you can swap the model out... but the workflow hasn't really changed that much."
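That "swap the model, keep the workflow" idea maps naturally onto a small abstraction, sketched below. The model calls and signatures are illustrative stand-ins, not a real SDK; the point is that the two-step text-to-image, image-to-video workflow stays fixed while the backends change.

```python
# Illustrative only: a text-to-image-to-video workflow where the model
# backends are interchangeable. Model functions here are placeholders.
from typing import Callable

def make_workflow(text_to_image: Callable[[str], str],
                  image_to_video: Callable[[str, str], str]):
    """Return a function that runs the same two-step workflow
    regardless of which models are plugged in."""
    def run(prompt: str, motion_prompt: str) -> str:
        still = text_to_image(prompt)                 # step 1: lock the look
        return image_to_video(still, motion_prompt)   # step 2: add motion
    return run

# Swapping models changes these two arguments, not the workflow itself.
workflow = make_workflow(
    text_to_image=lambda p: f"image({p})",
    image_to_video=lambda img, m: f"video({img}, {m})",
)
print(workflow("a neon-lit alley in the rain", "slow dolly forward"))
```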
The developments covered in this episode of Denoised highlight the continuing acceleration of AI capabilities in the creative space. Google's Veo 3 and Flow platform offer increasingly photorealistic video generation but at premium prices. Viggle AI demonstrates how real-time transformation can eliminate technical barriers that previously limited character-based content creation. And Jony Ive's move to OpenAI suggests we may soon see purpose-built hardware designed specifically for AI interaction.
For creative professionals, these tools represent both opportunities and challenges. The quality of AI-generated content continues to improve, but workflows still require refinement for seamless integration into professional production pipelines. As Joey and Addy note, understanding these tools and their limitations, regardless of whether you choose to incorporate them into your work, has become an essential part of staying informed in today's creative technology landscape.