In a recent episode of VP Land, Addy and Joey dove into three tightly related topics that are reshaping modern media production: the controversial Sphere presentation of The Wizard of Oz, a bold new platform called Showrunner that bills itself as a kind of "Netflix of AI," and the practical ways filmmakers are combining traditional VFX techniques with generative AI tools. This article unpacks the technical realities, the creative trade-offs, and the cultural reactions behind those stories while laying out practical takeaways for creators, producers, and curious audiences.

The Sphere's Wizard of Oz: What Happened and Why People Reacted

The Sphere's Wizard of Oz presentation aimed to bring an archetypal film to a 21st-century, 16K immersive canvas. The experiment combined re-engineered audio, interactive theater elements, and visual expansions of a 4:3, near-century-old film to fill a massive dome. The result was meant to be both an homage and an immersive experience, but the first footage shown on CBS Sunday Morning sparked a wave of confusion and memes online.

At the core of the controversy were a few visible choices: enlarged peripheral imagery, vivid saturated skies, and newly generated background content that changed the cinematic composition audiences had long accepted. In some demo clips, the creative team attempted what would best be described as "outpainting"—expanding the original 4:3 frames outward to fill a vast spherical display. That meant filling in formerly off-screen areas, sometimes requiring the generation of new content (for example, a character stepping out of frame that needed to persist in the expanded view).
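
To make the outpainting idea concrete, here is a minimal sketch of how a single frame might be extended sideways with an open-source inpainting model from the diffusers library. The model ID, padding size, and prompt are illustrative assumptions; the Sphere team's actual tools and pipeline have not been made public.

```python
# Sketch only: pad a 4:3 frame on both sides and let an inpainting model fill
# the new regions, leaving the original pixels untouched. Model choice, sizes,
# and prompt are assumptions, not the Sphere production pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# A publicly available inpainting model, used here as a stand-in.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

def outpaint_sides(frame: Image.Image, pad: int, prompt: str) -> Image.Image:
    """Extend a frame horizontally by `pad` pixels on each side."""
    w, h = frame.size
    canvas = Image.new("RGB", (w + 2 * pad, h), "black")
    canvas.paste(frame, (pad, 0))

    # White = areas the model may invent; black = protected original frame.
    mask = Image.new("L", canvas.size, 255)
    mask.paste(Image.new("L", (w, h), 0), (pad, 0))

    # Diffusion models want dimensions divisible by 8; round down for the sketch.
    out_w, out_h = (canvas.width // 8) * 8, (canvas.height // 8) * 8
    return pipe(prompt=prompt, image=canvas, mask_image=mask,
                width=out_w, height=out_h).images[0]

# Hypothetical usage: widen a Kansas farmhouse shot with matching scenery.
# wide = outpaint_sides(Image.open("frame_0001.png"), pad=512,
#                       prompt="sepia-toned 1930s Kansas farmland, film grain")
```

Even in this toy form, the core tension is visible: the original pixels stay locked, while everything the audience sees in the periphery is newly synthesized and gets judged against their memory of the film.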

Outpainting on a creative classic like The Wizard of Oz is inherently risky. Audiences have strong emotional attachments to specific framing, pacing, and audio cues; morphing those elements into wide, immersive panoramas changes the work's identity. Critics reacted to the perceived mismatch between the film's original composition and the new peripheral imagery, and social media quickly produced meme versions that amplified the issue.

Another technical complication: the Sphere is not a flat screen. It's a hemispherical, theatrical VR-like environment that effectively places thousands of audience members inside a massive headset. That introduces new constraints—motion sickness sensitivity, viewer field-of-view differences, and the impossibility of simply scaling up the center of the frame without addressing the periphery. If Dorothy looks like a 100-foot-tall figure from some vantage points, the effect can be jarring and disorienting.

But the full story is more nuanced. Multiple VFX artists and studios reportedly worked for months to enable the conversion, and many of the visible elements required painstaking manual work: upscaling, repairing AI errors, compositing, and retouching. Those human contributions are easy to overlook when the press shorthand becomes "AI did it," but the reality appears to be a hybrid workflow that leaned heavily on traditional VFX skills to make the end result presentable at extremely high resolutions.

Why AI Got the Blame—and Where That Critique Falls Short

Online outrage tended to center on a simplified narrative: AI automatically remixed a beloved movie for profit. That narrative is emotionally persuasive, but technically inaccurate. There are major hurdles to using generative AI directly at 16K or beyond: most current tools output frames that are at best a few thousand pixels on a side, far short of a dome-filling master. Taking a 4:3, lower-resolution film and expanding it to a 16K spherical display requires multi-step upscaling, hand-correction, and careful compositing. AI can accelerate parts of the pipeline (outpainting, texture synthesis, background generation), but it does not yet replace teams of artists in high-end production work.
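
As a rough illustration of the resolution gap, the sketch below performs staged upscaling, using plain Lanczos resampling as a stand-in for the machine-learning super-resolution models and manual paint passes a real remaster would layer in between steps. The file name and target width are assumptions for the example.

```python
# Sketch only: repeatedly double the image instead of jumping straight to 16K.
# A production pipeline would swap the resize call for an ML upscaler and add
# artist cleanup between passes; this just shows the staged structure.
from PIL import Image

# A 16K master far exceeds Pillow's default decompression-bomb limit.
Image.MAX_IMAGE_PIXELS = None

def staged_upscale(src: Image.Image, target_width: int, step: float = 2.0) -> Image.Image:
    img = src
    while img.width * step < target_width:
        img = img.resize((int(img.width * step), int(img.height * step)),
                         Image.LANCZOS)
    # Final pass lands exactly on the target width, preserving aspect ratio.
    return img.resize((target_width,
                       int(img.height * target_width / img.width)),
                      Image.LANCZOS)

# Hypothetical usage: push a ~1440 px-wide 4:3 scan toward a 16,000 px-wide canvas.
# master = staged_upscale(Image.open("oz_scan_0001.png"), target_width=16000)
```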

VFX artists who spoke up on Reddit and other forums made that point strongly: months of late nights, set rebuilds, paint work, and compositing went into the Sphere project. To call the finished experience "just a computer" overlooks the labor and craftsmanship involved in remastering and recontextualizing archival film for an immersive venue.

Audio, Interactivity, and the Business Headache

Visuals were only one piece of the puzzle. The Sphere boasts a bespoke audio system with roughly 160,000 finely tuned speaker drivers and a full re-engineering of the original mono soundtrack. The producers also added interactive elements (wind machines for the tornado sequence, seat feedback, voting or button interactions in some shows), so it's a multi-sensory event rather than a simple projection.

From a business perspective, the Sphere gambled on an expensive conversion process—reportedly in the millions—to create a unique draw for Vegas tourism and recurring ticket sales. That investment forces difficult creative decisions: should the original be left intact, should new scenes and shots be stitched in, or should the film be re-shot entirely with modern cameras? The team chose to retool the original, even cutting roughly 30 minutes for reasons not fully disclosed (pacing, motion-sickness concerns, or the practical need to increase daily showings were all floated as possibilities). That choice added fuel to the debate about authenticity and whether the experience still qualifies as The Wizard of Oz.

Showrunner AI: The “Netflix of AI” and What It Actually Aims to Do

Showrunner is an emerging platform that has attracted attention (and investment) because of a simple-to-sell vision: let anyone generate episodic animated content starring themselves, with AI handling character rendering, dialogue, and episode generation. Its first publicized property, Exit Valley, is a riff on Silicon Valley in which users can insert themselves into a shared universe and generate spin-off episodes or short series.

Calling Showrunner the "Netflix of AI" is provocative but reductive. It promises a high volume of personalized content rather than curated, auteur-driven programming. The underlying economics likely include a major cloud-compute play (turning AWS GPU cycles into a predictable revenue stream) plus a possible licensing model that could let IP owners monetize fan-created "side quests" inside franchise universes.

Showrunner's value proposition breaks down into a few scenarios:

  • Personalized animated episodes for small groups or friends—think of it as a creative hangout producing short, private entertainment.

  • Fan-driven extensions of existing IPs where licensed worlds (a Lord of the Rings-like example was discussed hypothetically) become playgrounds for user-generated spin-offs.

  • Rapid prototyping for studios—discovering breakout ideas via user engagement and elevating the most popular fan-created narratives into higher-budget productions.

There are cultural questions too. The ritual of collective viewing—shared premieres, weekly watercooler moments, podcasts and reaction videos—fuels mainstream conversation. Personalized, individually generated shows can fracture that shared space. But younger audiences who grew up with Roblox, Minecraft, and social hangouts-as-games may find the hybrid entertainment model more natural: less passive, more co-creative.

Where Showrunner Could Matter

If Showrunner becomes a fertile incubator for prototyping micro-IP that scales, it could be a new pipeline for content discovery. The company behind it has a track record of building automated episodic experiments (notably a South Park experiment that ran into IP issues), and the vision is to enable remixable, multiplayer, and personalized shows. Whether the platform produces lasting cultural hits or remains a curiosity may depend on licensing partnerships, moderation, and the quality of autogenerated storytelling.

Practical Workflows: How Creators are Mixing Traditional VFX with Generative AI

Beyond big experiments and platform bets, smaller creators and VFX artists are showing practical ways to blend generative AI into usable production pipelines. The trend is not "replace the artist" but "amplify the artist." Several creators and case studies demonstrate the most promising approaches:

  • Hybrid compositing: Shoot real actors on green or practical sets, then replace backgrounds with generative elements. This retains human performance (still the clearest path out of the uncanny valley) while achieving dramatic environmental transformations; a rough sketch of this kind of composite appears after the list.

  • Background generation + manual cleanup: Use AI to create expansive skies, crowds, or distant assets, then have artists paint, track, and composite those elements at higher resolution where needed.

  • Iterative, scene-level adoption: Producers and creators deploy AI for individual shots or scenes rather than entire features. This avoids continuity issues and lets teams discover where AI is strongest (e.g., matte backgrounds, crowd fills).
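
As a concrete, deliberately simplified example of the hybrid-compositing approach in the first bullet, the sketch below keys a green-screen plate and lays it over an AI-generated background with OpenCV. A real production would use a dedicated keyer inside Nuke, Resolve, or After Effects; the thresholds and file names here are placeholders.

```python
# Sketch only: a crude chroma key plus an over composite. Key range, blur, and
# file names are assumptions to be tuned per shot.
import cv2
import numpy as np

def composite_over_generated_bg(fg_path: str, bg_path: str, out_path: str) -> None:
    fg = cv2.imread(fg_path)                          # green-screen plate (BGR)
    bg = cv2.imread(bg_path)                          # AI-generated environment
    bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))   # match plate resolution

    # Key the green range in HSV space to build a rough matte.
    hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
    alpha = cv2.GaussianBlur(255 - green, (5, 5), 0) / 255.0  # soften the edge
    alpha = alpha[..., None]

    # Standard "over": keep the actor where the matte is solid, background elsewhere.
    comp = (fg * alpha + bg * (1.0 - alpha)).astype(np.uint8)
    cv2.imwrite(out_path, comp)

# Hypothetical usage:
# composite_over_generated_bg("actor_greenscreen.png",
#                             "generated_harbor.png",
#                             "comp_v001.png")
```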

Examples discussed included a "pirate" short by Albert Bozesan where creators shot a campy, attic-based pirate film and used generated environments to create a convincing world. Another creator performed a Super Mario-style transformation sequence using image-to-video workflows and compositing to sell a surprising physical change while retaining authentic lighting and contact. Rory Flynn pulled Google Earth imagery and other generative assets into a naval battle sequence built from aerial plates and AI-sourced content.

Best Practices and the Road Ahead

From the conversations, several practical lessons emerge:

  1. Keep actors real when human presence is central. Generative humans are still in the uncanny valley for many viewers.

  2. Use AI as a force multiplier, not a replacement. The most compelling results come from AI-generated elements that are then refined by artists.

  3. Build tools into existing pipelines. A Nuke plugin or compositing-friendly exports would accelerate adoption by allowing teams to slot generative outputs into familiar workflows (a small export sketch follows this list).

  4. Iterate at the scene level. Keeping lighting and continuity consistent across angles is still the hardest problem for AI-driven sequences, which makes multi-angle scene work the next frontier.
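
On point 3, a compositing-friendly export can be as simple as writing generated elements out as a numbered 16-bit image sequence with an alpha channel, so a Nuke or After Effects artist can read them like any other render. In the sketch below, generate_element() is a hypothetical placeholder for whatever model produces the frames.

```python
# Sketch only: dump AI-generated RGBA frames as a numbered 16-bit PNG sequence
# that slots into a standard compositing read node.
from pathlib import Path
import numpy as np
import cv2

def export_sequence(frames: list[np.ndarray], out_dir: str, name: str = "element") -> None:
    """frames: float32 RGBA arrays in the 0..1 range, shape (H, W, 4)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, rgba in enumerate(frames, start=1):
        bgra = rgba[..., [2, 1, 0, 3]]                         # OpenCV expects BGRA
        img16 = np.clip(bgra * 65535.0, 0, 65535).astype(np.uint16)
        cv2.imwrite(str(out / f"{name}.{i:04d}.png"), img16)   # element.0001.png, ...

# Hypothetical usage (generate_element is a placeholder, not a real API):
# export_sequence(generate_element(prompt="drifting fog bank", frames=48),
#                 out_dir="renders/fog_v001")
```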

Filmmakers like Kavan the Kid and collective efforts such as Phantom X (which pursued SAG approval for AI-assisted productions) illustrate possible paths forward: hybrid creative models that accept both human craft and algorithmic assistance. The future will likely be a continuum where some projects are fully AI-generated, but many more will be human-led with clever generative augmentation.

Conclusion: Experimentation, Context, and Respect for Craft

The Sphere's Wizard of Oz experiment, Showrunner's ambitious proposition, and the creative demos from small creators together map an industry in transition. Each project reveals opportunities and pitfalls. The Sphere provoked valuable cultural debate about authenticity, preservation, and how to modernize archival works. Showrunner poses interesting questions about personalization, community, and the economics of cloud compute. And the creator demos remind the industry that the best short-term path is hybrid: combine human performance and artistry with generative tools that accelerate ideation and lower budgets.

Ultimately, the conversation is not about whether AI will erase human creativity, but about how artists adapt. The most productive responses will center on skillful blends of art and tools, sensible licensing and IP strategies for fan-driven content, and honest communication about the human labor behind polished results. If anything, these early experiments serve as a reminder that technology can open new doors, provided creators, audiences, and industry stakeholders remain thoughtful about the choices they make.
