Val Kilmer is starring in a film he never shot. Improv actors are selling their emotions at $74 an hour. An AI Andy Cohen is interviewing real people on Peacock. And Google just dropped a vibe design tool that made Figma's stock dip. This episode sits at the intersection of what AI can do, what it probably shouldn't, and what filmmakers actually need.
Quick Take
The through-line across this episode is the gap between what AI can technically do and whether anyone should want it. Val Kilmer's posthumous performance has family consent and union approval behind it, which makes it legible. The Handshake AI improv pipeline is legible too, and we think it will produce generic mush. The AI chatbot journalism stunts have no framework at all. Meanwhile Google is quietly rebuilding Figma's and Replit's entire product category in one release, and Apple is off in a corner shipping research for 3D mesh reconstruction that nobody else seems to be chasing.
What We Debated: Val Kilmer's AI Return in "As Deep as the Grave"
Val Kilmer was cast as Father Fintan in As Deep as the Grave in 2020, five years before his death. He was too sick with throat cancer to shoot a single scene. The production is now reconstructing his performance using archival footage and family-provided images, with his daughter Mercedes and son Jack backing the project. The film follows SAG-AFTRA guidelines and compensates his estate.
We pushed on whether this is categorically different from Ian Holm's reconstructed appearance in Alien: Romulus. The answer we kept landing on: not really. Both involve a deceased actor, family sign-off, and digital recreation. The cleaner comparison people reach for, Paul Walker in Furious 7, doesn't actually work here. Walker had shot most of the film before his death. Kilmer shot nothing. Every frame is generated.
The marketability angle matters more than it gets credit for. This is an indie film, and having Kilmer on the marquee is the difference between a movie that gets distribution and one that doesn't. That's the working calculus for small productions.
What we couldn't evaluate is the performance itself. Nobody has seen it. A deepfake with nothing underneath it is not the same as a performance captured on set and then re-lit. The question of whether the reconstructed Kilmer reads as a character or as an artifact only gets answered when the film releases.
What We Questioned: Handshake AI Pays Improv Actors $74/Hour for Emotion Data
Handshake AI is running Zoom sessions with improv actors to capture emotional performance data at $74 per hour, feeding it into models for "one of the leading AI companies." Demand for the company's training data tripled last summer, and it crossed a $150 million revenue run rate in November.
Addy's prediction: this will fail to produce the thing it's ostensibly built to produce. You cannot mash 1,000 actors' versions of "happiness" into one latent space and get a compelling performance out the other end. You get the average of happiness, which reads as a greeting card. The thing that makes Val Kilmer's happiness different from Tom Cruise's happiness is not a data point you can capture by asking 1,000 people to perform happiness into a webcam. It's the individual actor.
So where does this data go? We landed on gaming and NPCs as the likely end use. A non-player character in a video game doesn't need to be Daniel Day-Lewis. It needs to emote in a way that reads as human across a broad range of triggers, and generic is actually fine there. For film performance, we don't see it. The individuality of a performance is the asset, and aggregate training is the opposite of that asset.
What We Examined: AI Andy Cohen, Glenn Beck's Washington, and Claude Playing Dario
Peacock has rolled out an AI Andy Cohen chatbot. Glenn Beck sat down for a long interview with an AI George Washington. Vanity Fair ran an interview with Claude while prompting it to pretend to be Dario Amodei, Anthropic's CEO. We spent this segment asking the same question three times: why.
The Andy Cohen version is the most baffling. Real Andy Cohen is prolific, accessible, and does interviews constantly. The marginal utility of an AI version is unclear even as a novelty.
The Glenn Beck interview with "George Washington" is a different failure mode. The model has no privileged access to Washington's inner life. It's recombining text. Treating the output as if it tells us something about the historical figure is a category error, and broadcasting it as an interview gives the output a journalistic credibility it has not earned.
The Vanity Fair piece is the one that worried us most, because it was published in a reputable outlet. Claude-pretending-to-be-Dario is not Dario. Framing model output as an executive's views, even with disclaimers, creates a new kind of sourced-but-not-real quote that is going to migrate into other coverage. We don't have the journalistic framework for this yet, and outlets are already printing it.
What We Tested: Google Stitch and the Full-Stack Vibe Coding Platform
Google launched Stitch, a vibe design platform that turns sketches and text prompts into high-fidelity UI designs. Figma's stock dipped on the announcement. In the same window, Google AI Studio added a full-stack vibe coding environment with a built-in database and authentication layer, which pushes directly into Replit and Vercel territory.
The capability bundle here is aggressive. Sketch to design. Design to deployed app. Database and auth included. If it works, it compresses a workflow that today spans three or four companies into one tab.
The Google-kills-products anxiety is real. Stitch sits in Google Labs, which is where a lot of tools go to be sunset, and the AI Studio build environment is just as experimental. Anyone betting a production workflow on either should assume a one-to-three-year shelf life and plan accordingly.
The part we kept coming back to is the entry-level job picture. Junior frontend and design roles already absorbed the first wave of AI compression. A full-stack vibe coding platform with a real database behind it collapses another layer. The ladder to senior work assumes a bottom rung that is actively getting filed down. That's a talent pipeline problem, not a tool problem, and it's not one any single company is going to solve.
What We Explored: Apple's LiTo and the 3D Mesh Research Nobody Else Is Doing
Apple's machine learning team published LiTo, a method for generating 3D meshes with light-accurate textures from input imagery. The targeting is narrow and deliberate: surface light fields, proper baked lighting, real geometry suitable for spatial computing.
Nobody else is shipping this. OpenAI and Google are chasing text-to-video and general agents. Meta is on avatars and mixed reality UX. Apple is quietly publishing 3D reconstruction research that no one else prioritizes, because Apple has one product that needs it specifically: Vision Pro, and whatever follows.
The Vision Pro install base is small. The research pipeline feeding it is being built anyway. Three years from now, when spatial content creation tools have to exist, Apple will either have the models in production or will have spent the research budget and shipped nothing. Either way, it's the only company treating this as a must-solve.
What We Covered: Runway, Photoshop Rotate Object, and Firefly Custom Models
Runway and NVIDIA pushed a sub-100ms text-to-video developer preview, approaching real-time generation. The technical milestone is real. The honest limitation is that text is still the weakest input modality for video. Describing a shot in words throws away the thing a filmmaker actually knows, which is spatial and temporal composition. Krea's real-time object manipulation is closer to what on-set iteration actually looks like. Joey has been using Beeble's SwitchX for a One Battle After Another project, which is the kind of video-to-video input that actually matches how a shot gets made.
Photoshop shipped Rotate Object, which takes a 2D image element and lets you rotate it as if it were a 3D object, filling in the hidden geometry on the fly. Addy tested it and it works. This is the kind of discrete, well-scoped AI feature that lands cleanly because it has one job.
Adobe Firefly opened custom model training in public beta, where you upload 10 to 30 JPEGs of your own artwork and get a model tuned to that style. It's a consumer-grade LoRA, essentially, with the sharp edges filed off. The demos so far are illustration styles, not photorealism, and we suspect that's because the photorealism case is where the legal conversation gets harder. For illustrators with an established style, it's an immediately useful tool.
On the open-source side, we keep pointing back to our ComfyUI CEO interview for context on where the node-based generation ecosystem is heading. And Netflix's InterPositive acquisition is the studio-side version of the same story: custom models, trained per production, controlled by the shop that owns the footage.
Bottom Line: Consent, Data, and Who Owns the Output
Three bets are visible across this episode. Kilmer's estate bet that consent and union approval make a posthumous AI performance legitimate. Handshake AI bet that you can buy your way to human-quality emotion with an improv Zoom call. Google bet that one company can own design, code, auth, and database in a single vibe coding surface.
The Kilmer bet is the one most likely to hold, because the framework around it already exists. The Handshake bet is the one most likely to produce something that looks right in a demo and feels wrong in a film. The Google bet is the one that reshapes the most businesses if it lands, and erases the most entry-level jobs on the way. Each one is a different answer to who owns the output when the output is generated.
Links from This Episode
Featured Stories:
Tools & Platforms:
Related VP Land Coverage: