
Runway AI Tips, Creator Economy Studios, and Parallel AI Processing

In the latest episode of Denoised, hosts Joey Daoud and Addy Ghani dive into three important developments at the intersection of technology and media. From new techniques for controlling AI image outputs to the rise of YouTube creator-built studios rivaling traditional networks, this episode covers practical insights for creative professionals navigating the rapidly evolving landscape.

Runway References & ChatGPT Images: New Control Methods for AI Image Creation

Joey and Addy begin by analyzing Runway's References feature, which gives creators significantly more control over AI-generated outputs. The feature lets users provide reference images that guide the AI's composition and framing.

As Joey explains, "The main thing is control. And this is like control over your AI output of what you want and making sure the AI follows it."

Runway CEO Cristóbal Valenzuela has been showcasing this capability by feeding crude sketches and diagrams into the system:

  • Users can create simple Microsoft Paint-style layouts with boxes representing elements

  • Label these elements to indicate characters or objects

  • The AI will then honor this layout when generating the final image

Joey tested this approach by creating physical compositions with building blocks as a kind of 3D storyboard, photographing them, and then feeding those references to AI tools:

  • His test with Runway produced mixed results, with the AI not fully adhering to the intended composition

  • When testing with ChatGPT's image generation (which includes an interactive chat interface), he found better results through the ability to refine and adjust through conversation

The hosts noted that while these tools aren't yet production-ready for final images, they're extremely useful for:

  • Concept visualization

  • Storyboarding

  • Communicating shot ideas to cinematographers

  • Rapid iteration on visual concepts

Addy emphasized the improvement over traditional methods: "A DP will take this over a hand sketch... crappy stick figure thing."

Other techniques discussed include:

  • Using overhead layout drawings to help the AI understand spatial relationships

  • Creating multi-quadrant video outputs in a single Runway generation for consistency

  • Controlling seed values to maintain consistency across generations

  • Potentially using AI for time-lapse creation without the traditional technical headaches
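The seed-control point deserves a concrete illustration. This is not Runway's internal implementation (which isn't public), just the general principle behind any seed parameter in a generative tool: a fixed seed pins down the initial random noise the model starts from, so the same prompt plus the same seed produces near-identical results. A minimal sketch:

```python
import numpy as np

def generate_latent(seed: int, shape=(4, 8, 8)) -> np.ndarray:
    """Draw the kind of initial noise tensor a diffusion model denoises.

    Fixing the seed fixes this starting noise, which is why tools that
    expose a seed produce consistent images for the same prompt + seed.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = generate_latent(42)
b = generate_latent(42)
c = generate_latent(43)

assert np.array_equal(a, b)      # same seed: identical starting noise
assert not np.array_equal(a, c)  # different seed: different noise
```

In practice this is why re-running a generation with the seed locked is a cheap way to keep a character or scene consistent while you tweak only the prompt.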

This marks an important shift in AI image creation from random generation to precise, intentional composition that aligns with a creator's vision.

Creator Economy Building the New Studios

The hosts discussed a notable Hollywood Reporter story about YouTube creators building substantial production facilities that rival traditional studios. This trend represents a significant shift in where content production infrastructure is developing.

"These YouTube creators are building full-on sets and sound stages and back lots and stuff for their videos," Joey explains.

Several major creators were highlighted:

  • Dhar Mann (25 million subscribers) has built significant production capacity

  • Alan Chow (Alan's Universe) is developing studio space focused on Nickelodeon-style content

  • Dude Perfect is constructing a $100 million studio in Dallas with potential virtual production capabilities

  • MrBeast has established extensive production facilities in Greenville, North Carolina

The hosts noted how these creator-driven operations are attracting traditional media talent:

  • Alan Chow's company employs former Disney and Lionsgate executives

  • His casting director came from Nickelodeon

  • The CEO was previously president of MTV

Rather than creators adapting to Hollywood, there's evidence of Hollywood adapting to these new content powerhouses. As Joey notes, the goal isn't necessarily "to turn your brand and go into Hollywood" but rather to bring Hollywood resources into creator-driven operations: "MrBeast got a deal to fund his show on Amazon... but he probably would've been fine without it."

The hosts suggested these creator studios aren't necessarily replacing blockbuster films but rather mid-level network content:

  • Daytime TV formats

  • Children's programming

  • Educational content similar to Discovery Channel or TLC

  • Reality TV formats

Addy points out: "In a lot of ways, the creator economy is picking up a lot of the slack that we have in Hollywood at the moment. There's a lot of action on that side of the fence... people getting hired, productions being made for a completely different use case."

This development highlights how the traditionally separate worlds of Hollywood and digital content creation continue to merge, with the balance of power shifting toward independent creators who have built massive audiences.

Continuous Thought Machines: A New Neural Network Approach

The final segment explored an emerging concept from the Japanese AI lab Sakana AI: Continuous Thought Machines (CTM).

This approach represents an important shift in neural network architecture by incorporating explicit timing, making AI processing more like human thought patterns.

Addy explains: "In the world of neural networking, it feels very much asynchronous... Your prompting and your input goes in on one end, and then it goes through these layers as it progresses... But that's not how our brain works."

The key differences between current neural networks and CTM:

  • Traditional networks: Process information sequentially through layers

  • CTM: Incorporates timing to allow multiple thought processes to occur simultaneously

  • Human brain: Runs multiple interdependent sub-thoughts at the same time, which then combine into more complex thoughts

Addy illustrates with a driving example: "If you're driving, a portion of your brain is processing the road... another portion is probably thinking about where to go... a third portion is probably thinking about safety... and all those three things together combine to the driving decision."
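The contrast can be sketched in a few lines of toy code. This is an illustration of the idea, not Sakana AI's published architecture: the three "modules" are named after Addy's driving example, and each is just a small random linear map. The feedforward version makes one timeless pass through the layers; the ticked version lets the three sub-processes update on every tick while reading each other's previous state, before a decision is read out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "modules": each is just a small linear map for illustration.
W_road, W_route, W_safety = (rng.standard_normal((4, 4)) * 0.1 for _ in range(3))
W_combine = rng.standard_normal((4, 12)) * 0.1  # reads all three modules

def feedforward(x: np.ndarray) -> np.ndarray:
    """Conventional net: one pass, layer after layer, no notion of time."""
    parts = np.concatenate([W_road @ x, W_route @ x, W_safety @ x])
    return np.tanh(W_combine @ parts)

def ticked(x: np.ndarray, n_ticks: int = 5) -> np.ndarray:
    """CTM-flavored sketch: sub-processes co-evolve over explicit ticks,
    each influenced by another's state from the previous tick."""
    road = route = safety = np.zeros(4)
    for _ in range(n_ticks):
        road, route, safety = (
            np.tanh(W_road @ (x + route)),    # road perception, shaped by route
            np.tanh(W_route @ (x + safety)),  # route planning, shaped by safety
            np.tanh(W_safety @ (x + road)),   # safety monitoring, shaped by road
        )
    return np.tanh(W_combine @ np.concatenate([road, route, safety]))

x = rng.standard_normal(4)
decision_a, decision_b = feedforward(x), ticked(x)
```

The key design difference is the loop: in the ticked version, time is part of the computation itself rather than something that only happens between separate forward passes.
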

The hosts position this as potentially significant for future AI development:

  • It's distinct from reasoning models that simply add feedback loops

  • Creates true parallel processing rather than just sequential operations

  • May represent an architectural evolution similar to how GPUs changed computing

While still theoretical, this approach could signal a meaningful shift in how AI systems are designed to process information, potentially making them more capable of complex, multi-faceted reasoning.

Conclusion

This episode of Denoised highlights the practical tools emerging for creative professionals, from new ways to control AI image generation to understanding how content creation infrastructure continues to evolve. The hosts provide valuable context on both immediate practical applications and longer-term developments that may shape future creative workflows.
