Three influential leaders in technology and entertainment gathered to discuss how AI is reshaping the film production landscape.
The "AI at the Frontier of Science, Art and Society" panel, part of the first-ever RenderCon held on Tuesday at Nya Studios in Hollywood, featured Jules Urbach, founder and CEO of OTOY and creator of the Render Network; Richard Kerris, VP and General Manager of Media & Entertainment at NVIDIA; and Emad Mostaque, founder of Intelligent Internet and former CEO of Stability AI, known for pioneering Stable Diffusion.
As major studios and independent creators alike grapple with the rapid advancement of AI, the conversation revealed both exciting possibilities and crucial challenges for the industry.
The panel focused in particular on rendering technologies, decentralized computing, and the evolving relationship between human creators and artificial intelligence.
With the Render Network continuing to build out its decentralized GPU rendering platform, the discussion ranged from technical innovation to the philosophical questions those tools raise.
The conversation began with a bold prediction about the future of rendering technology, with Mostaque asserting that fully AI-generated 4K games could arrive within a year. He highlighted how NVIDIA's new DLSS upscaling on RTX 50-series GPUs already means that "90% of the pixels that you see on the screen are rendered" through AI upscaling from lower resolutions.
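As a rough back-of-envelope check on that figure (the specific modes below are illustrative assumptions, not details given on the panel): DLSS performance mode renders at one quarter of the output pixel count, and multi-frame generation can produce three AI frames for every natively rendered one, so

```latex
% Share of displayed pixels that are natively rendered, assuming
% 4x performance-mode upscaling and 3 generated frames per rendered frame:
\frac{\text{rendered}}{\text{displayed}}
  = \underbrace{\tfrac{1}{4}}_{\text{upscaling}}
  \times \underbrace{\tfrac{1}{4}}_{\text{frame generation}}
  = \tfrac{1}{16} \approx 6\%
```

which would leave roughly 94% of on-screen pixels AI-generated, in line with the 90% Mostaque cited.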
"A year goes by very quick," Mostaque noted, pointing to the rapid advancement of generative AI models like Runway ML's Gen-4 and Higgsfield. He described the progression from simple content creation to more sophisticated control and composition capabilities, suggesting we're approaching an era where "real-time video games or holodeck type experiences" will become possible.
Urbach questioned whether neural rendering would completely replace traditional ray tracing and 3D rendering, asking whether AI models have enough context to guarantee that nothing is lost, the way a traditional pipeline does when every asset exists as a persistent 3D object. The panelists agreed that the immediate future likely involves hybrid approaches.
"I think that's where the future is likely to be—a combination of these two," Mostaque explained. "Because the control factors of the AI models aren't quite there." He suggested that the approach would depend on the level of accuracy needed, with different solutions ranging from high-end data center processing to consumer-grade experiences on home devices.
Richard Kerris emphasized that the future of AI in entertainment will focus on personalization and real-time capabilities. He predicted that "we're going to see a lot more personalization, a lot more capability of having things done in real time," suggesting that AI will become a tool that people use to tell stories rather than replacing storytellers.
You'll have hybridized experiences. You don't need to render in full. You can have all the logic of world interactions done in the cloud on the physics-based engines, the world models, but then it transports in very low resolution and then gets upscaled, but intelligently, on your edge devices.
Urbach raised concerns about memory limitations and hallucinations in current AI models, noting that "all these world models that we're seeing have some of the same problems as video models, which is if something goes out of frame, it forgets it." He suggested that an intermediate step might involve rendering in 3D with neural rendering applied on top.
Mostaque predicted that future systems would use "massive personalized models on the edge that then interact with these big world models in the cloud and layers in between to have that personalization and that consistency."
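To make that cloud/edge split concrete, here is a minimal Python sketch of the loop the panelists describe, with world logic resolved server-side, a low-resolution frame streamed down, and an on-device model upscaling it. Every class and function name here is invented for illustration; none of this is a real API.

```python
# Hypothetical sketch of the cloud/edge rendering split described above.
from dataclasses import dataclass

@dataclass
class Frame:
    width: int
    height: int
    pixels: bytes  # raw RGB, row-major

class CloudWorldModel:
    """Stand-in for the physics engine / world model running in a data center."""
    def step(self, player_input: str) -> Frame:
        # Resolve world logic at full fidelity, but emit only a cheap
        # low-resolution frame for transport to the client.
        return Frame(480, 270, b"\x00" * (480 * 270 * 3))

class EdgeUpscaler:
    """Stand-in for an on-device neural upscaler (DLSS-style)."""
    def __init__(self, scale: int = 4):
        self.scale = scale

    def upscale(self, frame: Frame) -> Frame:
        # A real implementation would run a neural network here; this
        # placeholder only reports the output geometry.
        return Frame(frame.width * self.scale, frame.height * self.scale,
                     frame.pixels)

def render_loop(inputs: list[str]) -> None:
    world = CloudWorldModel()   # would run remotely in practice
    upscaler = EdgeUpscaler()   # runs locally on the player's device
    for player_input in inputs:
        low_res = world.step(player_input)    # cloud: logic + low-res frame
        display = upscaler.upscale(low_res)   # edge: intelligent upscaling
        print(f"{low_res.width}x{low_res.height} -> "
              f"{display.width}x{display.height}")

if __name__ == "__main__":
    render_loop(["move_forward", "look_left"])
```

The design point, per the panel, is that bandwidth and client hardware only ever carry the low-resolution stream, while consistency comes from the authoritative world state held in the cloud.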
When discussing AI's role in Hollywood, Urbach noted the controversy surrounding a Paramount presentation at South by Southwest that drew criticism for its embrace of AI.
People are using AI everywhere in production. They just don't talk about it.
Kerris emphasized that AI has been part of the film industry for decades, pointing to technologies like digital aging and digital twins. He attributed recent backlash to misunderstandings about AI's role: "When people come out and they say this is a bad thing, they're saying it because it's from a point of somewhat being ignorant to what the tool is actually meant to do. It's meant to help you tell a story."
The discussion highlighted how studios have become reluctant to publicly discuss their AI usage for fear of protests, despite long-standing implementation. Kerris praised companies like Runway for focusing on artistic outcomes, treating technology as a means to the art rather than an end in itself: "What I like about them so much is that they're about the art."
As head of Media & Entertainment at NVIDIA, Kerris described the company's vision for filmmaking as enabling more people to tell stories through initiatives like Media2, which aims to bring "AI to all facets of the media workflows, not just the stuff you see on the screen, but how can we help it be more cost effective, more distributed, more accessible."
A fascinating thread throughout the discussion was the relationship between AI-generated content and human creativity. While AI can produce visually impressive results, the panelists acknowledged that human judgment remains essential.
Mostaque noted that the progression in AI has moved from simple creation to more sophisticated control and composition, but emphasized that human agency remains crucial: "I think a key part of all of this is a question of agency." He described how early AI tools felt disempowering because "it was just so instant, the feedback, you didn't feel like you had any control over any element."
Kerris compared AI tools to musical instruments, noting that just because a keyboard can play a demo doesn't mean it creates meaningful music: "There's still the taste, there's still the what are you after with it. Just because it can create these images and it can do all of those things, if you don't have a story to tell, it's not going to make any difference."
The distinction between content and art emerged as a key theme. "Are you creating art or are you creating content?" Mostaque asked, drawing parallels to manufactured music groups versus true musicians. "Sometimes you need McDonald's, sometimes you need to have high-end cuisine. That's the range."
The panel explored the tension between open and closed source AI models, with Mostaque—who famously open-sourced Stable Diffusion—offering particular insight. "I think you need to have complementary things," he explained, noting that "you can always take an open model, add closed data, and then it'll be better."
Mostaque envisioned a future ecosystem with multiple tiers of AI models: "The big guys will always have these super experts. Then you've got your personalized intelligence, which is your Apple intelligence, your Google intelligence, NVIDIA intelligence, maybe at the home. And then you've got the models for education, healthcare, government."
He emphasized the importance of choice and control: "Sometimes I want to control my AI, sometimes I don't care. And again, it just depends on the level of complexity I want there."
Kerris added that closed models might be necessary in certain contexts, using a music example: "You may want to jam with Eddie Van Halen, but you don't want Eddie Van Halen to create new music. And the estate... you can jam with him all you want, but he's never going to create anything that's brand new because he's not with us."
The discussion highlighted how both approaches serve different purposes, with Mostaque concluding: "Just map it to the workforce. These models are graduates and they're experts. And sometimes you hire in and sometimes you have people working for you."
Looking at the near future of AI agents, Kerris predicted they would help users learn complex software: "Imagine you sit down to learn a piece of software that can be complicated... You have an AI agent that's alongside and you can say, 'I'm trying to do this, help me understand how to do that.' And it walks you through it, gives you education."
Mostaque expanded this vision beyond simple assistance to transforming creative workflows: "I think the real agent breakthrough over the next year will be your workforce, your team." He explained how AI could enable asynchronous work: "Just being able to message and say, 'hey, have you got this design work done, and improve this,' will enable you to scale your capabilities," he said, even when "you don't know if it's a human or an AI on the other side."
He noted that current creative processes with AI can feel too immediate: "Right now the creative process is a bit too quick in some ways. Like we can generate a song in five seconds, but it takes four minutes to listen to and four hours to give all the feedback."
Kerris shared an example of an AI "bandmate" that provides musical accompaniment, adjusting to the player's timing and style. This technology could eventually incorporate the styles of famous musicians, allowing users to jam with AI versions of their idols. "Their belief is that people will then stick to their instrument and learn a lot more because they're inspired by it," he explained.
When asked how AI would change our lives in the next five years, Kerris predicted AI would become "as commonplace as the robotic vacuums that you have," handling various tasks and giving people more time for family and passions. "It gives you more time as a person to do things that you want to do," he explained.
Mostaque offered a more transformative vision, suggesting AI would enable previously impossible capabilities: "Things that could never have been done before, because we weren't able to scale intelligence, weren't able to scale expertise, suddenly become available." He described an AI cancer support system that "outperforms human doctors in empathy" and would be available in every language.
The panelists discussed how AI would transform education, with Mostaque citing studies in which, after "two months of ChatGPT, they got two years of education advancement." He explained that AI tutoring could provide the "Bloom 2 sigma effect," a two-sigma improvement with one-to-one tuition: "That's what's coming for our kids."
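For context on that statistic (this is background on Bloom's 1984 finding, not something elaborated on the panel): a two-sigma shift means the average tutored student scores higher than roughly 98% of conventionally taught students, since under a normal distribution of scores

```latex
% Probability that a conventionally taught student (mean \mu, s.d. \sigma)
% scores below the average one-to-one-tutored student at \mu + 2\sigma:
P(X < \mu + 2\sigma) = \Phi(2) \approx 0.977
```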
Both speakers emphasized that AI would augment rather than replace human capabilities. "I think that humans like this connection," Mostaque noted, suggesting future jobs might involve creating "entire holodeck experiences" that bring people together. Kerris added that AI would inspire more learning by making subjects more engaging and accessible: "If it is something that they can always interact with and it becomes something that they just love to work on, then they become an expert in it."
The panel concluded with a shared optimism about AI's potential to benefit humanity when developed responsibly, with Mostaque stating: "We're at the most exciting time in history. It's up to us to make sure it benefits everyone."