sync. launched react-1, a ten-billion-parameter masked video diffusion model that lets editors change an actor's on-screen performance—emotion, facial expressions, head movement, and timing—without reshoots, using new audio and guided emotional direction.

  • Beyond lip sync - react-1 "learns from your uploaded audio and reanimates the entire face," editing facial expressions, head movements, and timing across the performance, not just lip sync. You upload existing footage, provide new audio, guide the emotional direction, and the model generates a new performance while maintaining the actor's identity and style.

  • Emotion-level control - The interface offers selectable emotional reads like surprised, angry, disgusted, sad, happy, and neutral, letting editors "explore different reads with the click of a button" and change the emotional beat of a performance in post.

  • Localization angle - sync. positions this for dubbing workflows, claiming react-1 "doesn't just localize the lines: it localizes the entire performance," targeting global content pipelines where emotional performance needs to match new language or script changes.

  • Post-production integration - Built on sync.'s existing lip sync API infrastructure, react-1 works "on any video content in the wild—across movies, podcasts, games, and even animations" and is available via API and self-serve web product with a "Start for free" onboarding flow.
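
For readers curious about the plumbing, here is a minimal sketch of what submitting a react-1 job through an API like sync.'s could look like. The endpoint path, field names, and the emotion parameter are assumptions for illustration only; sync.'s actual API documentation defines the real request schema.

```python
# Hypothetical sketch: the endpoint path, field names, and "emotion" parameter
# below are assumptions for illustration, not sync.'s documented API.
import requests

API_KEY = "YOUR_SYNC_API_KEY"  # placeholder credential

payload = {
    "model": "react-1",                                # assumed model identifier
    "video_url": "https://example.com/scene.mp4",      # existing footage to re-perform
    "audio_url": "https://example.com/new_line.wav",   # new audio driving the performance
    "emotion": "surprised",                            # guided emotional direction (assumed field)
}

# Assumed REST endpoint; consult sync.'s API docs for the real path and schema.
resp = requests.post(
    "https://api.sync.so/generate",
    json=payload,
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
job = resp.json()
print("generation job submitted:", job)
```

In practice a request like this would return a job ID to poll for the rendered video, mirroring how asynchronous lip sync APIs typically behave; the exact response shape here is likewise an assumption.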
