In this episode of Denoised, hosts Addy Ghani and Joey Daoud explore three significant developments reshaping media technology: the inaugural RenderCon event, Higgsfield's promising new AI video model with enhanced cinematic controls, and Vimeo's latest offering that allows creators to build their own streaming platforms. The episode provides valuable insights for professionals navigating the rapidly evolving landscape of rendering technology, AI-generated video, and content distribution strategies.
RenderCon, hosted by OTOY founder Jules Urbach, marked the first in-person conference for the Render Network community. Joey, who attended the event, described it as exceptionally well-organized and professionally executed at Nya Studios in Hollywood.
The conference brought together key players in computer graphics, including NVIDIA, OTOY, and notable figures like Beeple and members of Corridor Crew. As Addy noted, "It seems like the people organizing RenderCon are well versed in organizing enterprise trade shows... the funding was clearly there."
For those unfamiliar with the host organizations, OTOY is highly respected in the computer graphics and VFX communities. As Addy explained, "Long before Unity and Unreal were really big players in our space in M&E, it was really the OG renderers - it was Octane, which is an OTOY product, V-Ray, Arnold... these were traditional path tracing renderers."
The Render Network itself serves as a distributed rendering solution - effectively GPU power on demand - allowing creators to access massive rendering capabilities without building their own render farms. This proves invaluable for projects requiring extensive computational resources, such as Alex Pearce's Sim-Plates for virtual production.
A recurring theme throughout the conference was the transition from traditional rendering to neural rendering, with discussions about how AI could transform rendering processes:
Traditional path tracing remains CPU/GPU intensive with many hours required for high-quality frames
Neural rendering offers potential efficiencies in simulating path tracing or creating shaders on the fly
Industry professionals consistently use computational gains to add more complexity rather than simply reducing render times
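The "many hours per frame" cost of traditional path tracing comes largely from Monte Carlo convergence: image noise falls roughly with the square root of the sample count, so halving the noise costs four times the samples. A minimal sketch of that scaling (the baseline numbers are illustrative, not tied to any specific renderer):

```python
def samples_for_noise(target_noise, base_noise=1.0, base_samples=1):
    """Samples needed to reach a target noise level, given that
    Monte Carlo noise scales as 1/sqrt(samples)."""
    return base_samples * (base_noise / target_noise) ** 2

# Halving noise quadruples the work; a 10x cleaner frame costs 100x.
print(samples_for_noise(0.5))  # 4.0
print(samples_for_noise(0.1))  # 100.0
```

This quadratic cost is one reason neural approaches that approximate the converged result are attractive, and also why, as noted above, artists tend to spend any speedup on added scene complexity rather than shorter renders.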
One of the conference highlights was a Star Trek-themed project that showcased OTOY's face-replacement technology, creating a short film featuring digitally recreated versions of William Shatner as Kirk and Leonard Nimoy as Spock. The project, created with input from Rod Roddenberry (Gene Roddenberry's son), demonstrated OTOY's real-time face-altering solution - similar to what companies like Metaphysic AI are developing.
The process involved:
Building physical clay models based on existing images
Scanning these models using 360-degree photogrammetry rigs
Creating digital versions that could be mapped onto actors' performances
This production raised important questions about digital likeness rights, estate management, and the future of actor representation. Super agent Ari Emanuel, an investor in OTOY, participated in discussions about how these technologies might reshape talent representation and monetization opportunities for actors' digital likenesses.
The second major topic covered was Higgsfield, a new AI video generation model that stands out in an increasingly crowded field of text-to-video solutions. What makes this particular model noteworthy is its architecture and focus on cinematic quality.
Developed by a team led by Alex Mashrabov (former AI lead at Snap who previously sold AI Factory for $166 million), Higgsfield appears specifically designed for filmmakers and visual storytellers.
The model's key technical characteristics include:
A hybrid "diffusion transformer" architecture combining elements of both diffusion models (like Stable Diffusion) and transformer models (like those used in ChatGPT)
Training that emphasizes cinematic lighting and camera movements
49 different pre-built camera shots that users can easily implement
Addy speculated that this hybrid architecture might be responsible for the model's improved comprehension of prompts: "When you combine it into hybrid architecture... the prompting will give you a much more accurate result because it's just so tied into LLM transformers underneath - it's better able to understand the direction and the language."
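The intuition behind a diffusion-transformer hybrid can be sketched in toy form: a denoising step whose noise prediction is steered by attention over the prompt tokens, which is how transformer conditioning ties the output to the language of the prompt. Everything here - the shapes, the one-step update, the toy noise estimate - is illustrative, not Higgsfield's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # embedding width (toy)
prompt_tokens = rng.normal(size=(5, d))   # stand-in text embeddings
noisy_latent = rng.normal(size=(8, d))    # stand-in video latent patches

def cross_attention(queries, keys):
    """Each latent patch attends over the prompt tokens (softmax of
    scaled dot products), pulling in prompt-conditioned context."""
    scores = queries @ keys.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys

def denoise_step(latent, tokens, step_size=0.1):
    """One toy diffusion update: the 'noise' estimate is conditioned
    on the prompt via cross-attention, so text steers the result."""
    conditioning = cross_attention(latent, tokens)
    predicted_noise = latent - conditioning
    return latent - step_size * predicted_noise

out = denoise_step(noisy_latent, prompt_tokens)
print(out.shape)  # (8, 16)
```

A real model repeats a step like this many times with learned networks in place of the toy arithmetic; the point is only that attention over prompt embeddings is what makes the conditioning "tied into LLM transformers underneath."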
A particularly impressive recent addition called Pulse focuses on realistic human movement patterns rather than just camera movements. Unlike other models that might struggle with complex physical interactions, Higgsfield has been trained on specific movement patterns including:
Baseball swings
Skateboard tricks (glides and ollies)
Skiing motions (carving and powder skiing)
Basketball dunks
This represents a significant advancement over previous models that often struggled with portraying natural human movements and object interactions. As Joey noted, "This is the first one I've seen where it has programming or training specifically for human movement."
The model appears to address longstanding issues with AI video generation, including the infamous "button press test" that Joey had previously found impossible to achieve with any AI model - where systems consistently failed to accurately show a finger pressing a button.
The final segment explored Vimeo's new product offering called Vimeo Streaming, which essentially provides creators with the backend infrastructure to launch personal streaming platforms similar to Netflix.
This service, which appears to be a rebranding or evolution of Vimeo's previous OTT (Over The Top) offering, handles the complex technical aspects of streaming platform creation:
Content Delivery Network (CDN) infrastructure
Cloud storage
Adaptive streaming based on connection quality
Platform development across multiple devices
As Addy explained: "The part here that's the hardest is the CDN... Every major player in this game has a very robust CDN. YouTube has their own, Netflix has their own, Verizon has their own, Vimeo has their own. So you're just leveraging this powerful CDN for your own version of Netflix."
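The "adaptive streaming" piece of that infrastructure can be illustrated with a simple rung-selection rule: serve the highest-bitrate rendition that fits within the viewer's measured bandwidth, with some safety headroom. The bitrate ladder and headroom value below are hypothetical examples, not Vimeo's actual configuration:

```python
# Illustrative bitrate ladder: (label, bitrate in kbit/s).
RENDITIONS = [
    ("1080p", 6000),
    ("720p", 3000),
    ("480p", 1500),
    ("360p", 800),
]

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the best rendition that fits within a safety margin
    of the viewer's measured bandwidth."""
    budget = measured_kbps * headroom
    for label, bitrate in RENDITIONS:
        if bitrate <= budget:
            return label
    return RENDITIONS[-1][0]  # fall back to the lowest rung

print(pick_rendition(8000))  # 1080p
print(pick_rendition(2500))  # 480p
```

Production players re-measure bandwidth continuously and switch rungs mid-stream, which, along with the CDN edge caching Addy describes, is exactly the machinery a service like this spares creators from building.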
The hosts discussed who might benefit most from such a service, noting several key considerations:
Creators need a substantial existing audience to convert 1-2% to paying subscribers
Initial setup costs are likely significant (historical pricing for Vimeo OTT started around $500-1000 monthly plus setup fees)
Content needs to be valuable enough to justify subscription costs
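Those considerations can be turned into back-of-envelope math: given a monthly platform cost, a subscription price, and a conversion rate, how big an audience is needed to break even? The platform cost and conversion rate come from the figures discussed above; the $5 subscription price is an assumed example:

```python
def break_even_audience(monthly_cost, sub_price, conversion_rate):
    """Audience size needed so subscription revenue covers the
    monthly platform cost."""
    subscribers_needed = monthly_cost / sub_price
    return subscribers_needed / conversion_rate

# At $500/month platform cost, $5/month subscriptions, 1% conversion:
# 100 paying subscribers are needed, i.e. an audience of ~10,000.
print(break_even_audience(500, 5, 0.01))  # 10000.0
```

At the high end of the historical pricing ($1,000/month) with the same assumptions, the required audience doubles to roughly 20,000, which illustrates why the hosts see collectives and niche producers with dedicated audiences as the most viable users.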
While individual creators might struggle to make the economics work, the hosts identified several promising use cases:
Collectives of creators pooling their content (similar to Nebula)
Niche content producers with dedicated audiences (like yoga instructors)
Podcast networks bundling multiple shows
Specialized content providers for environments like hotel lobbies, gas stations, or doctor's waiting rooms
The service aligns with the growing trend of viewers consuming streaming content on televisions rather than computers or mobile devices, potentially offering creators a more premium placement for their work.
This episode of Denoised provides a thorough examination of three technologies reshaping media production and distribution. From the future of rendering at RenderCon to Higgsfield's advancements in AI video generation and Vimeo's creator-focused streaming platform, these developments represent significant opportunities for professionals throughout the entertainment industry.
Each technology addresses different aspects of content creation - from the technical foundation of visual effects to the emerging possibilities of AI-generated video to new methods of content distribution. For media professionals looking to stay ahead of industry trends, understanding these tools and their implications will be essential for future success.