Runway has unveiled a powerful new feature for its Gen-4 AI video generation platform, giving Gen:48 participants exclusive early access to reference images that enable unprecedented consistency of characters, locations, and scene elements across multiple AI-generated videos.
We have released early access to Gen-4 References to all Gen:48 participants. References allow you to create consistent worlds with consistent characters and locations. This early preview is already available to all teams participating in Gen:48 for free. Good luck with your…
— Runway (@runwayml)
2:22 PM • Apr 26, 2025
The new reference functionality fundamentally changes how creators maintain visual consistency when working with AI generation tools, addressing a long-standing challenge in the industry.
Users can upload a single reference image (like a character or location) and use natural language prompts to place that element in completely different settings
The system preserves the visual characteristics of referenced elements while adapting them to new environments, camera angles, and lighting conditions
Multiple references can be combined (up to three) to create complex scenes with multiple consistent elements
References persist in your workspace, making them easily reusable across multiple generations (see the workflow sketch after this list)
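To make that workflow concrete, here is a minimal, hypothetical Python sketch of how a team might track persistent references and assemble generation requests from them. The `Reference` and `Workspace` classes, the `build_request` helper, and the request fields are illustrative assumptions for this article, not Runway's actual interface, which exposes References through its web editor.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Reference:
    """A persistent visual reference, e.g. a character or a location image."""
    name: str
    image_path: str


@dataclass
class Workspace:
    """Keeps references around so they can be reused across many generations."""
    references: dict[str, Reference] = field(default_factory=dict)

    def add(self, ref: Reference) -> None:
        self.references[ref.name] = ref

    def build_request(self, prompt: str, ref_names: list[str]) -> dict:
        # The feature combines at most three references in a single generation.
        if len(ref_names) > 3:
            raise ValueError("a generation can combine at most three references")
        refs = [self.references[name] for name in ref_names]
        return {"prompt": prompt, "reference_images": [r.image_path for r in refs]}


ws = Workspace()
ws.add(Reference("hero", "refs/hero.png"))
ws.add(Reference("cafe", "refs/cafe.png"))

# The same character and location references reused across different shots.
print(ws.build_request("hero enters the cafe, wide establishing shot", ["hero", "cafe"]))
print(ws.build_request("closeup of hero at the counter, warm evening light", ["hero", "cafe"]))
```

The point of the sketch is the reuse pattern: the same named references feed many different natural language prompts, which is what keeps a character or location visually consistent from shot to shot.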
Rather than treating each AI generation as an isolated creation, the reference feature transforms Gen-4 into a comprehensive scene-building system.
Creators can generate consistent background elements separately from characters, effectively building visual assets for an entire production
The system allows for varying camera angles of the same scene, opening possibilities for traditional cinematographic techniques like establishing shots followed by closeups
Visual elements can be modified while maintaining core identity (changing outfits, hairstyles, etc.)
Users can create an interconnected "cinematic universe" where characters and locations remain visually consistent
This development represents a significant shift in how production professionals can integrate AI tools into their existing workflows and creative processes.
The ability to maintain visual consistency addresses one of the most significant barriers to using AI in professional production environments
For storyboarding and pre-visualization, this feature dramatically reduces the time needed to create coherent sequential imagery
Virtual production teams can now quickly generate consistent concept art to inform LED wall content creation
The simplified workflow (drag and drop references combined with natural language prompts) makes the technology accessible to creative professionals without technical expertise
The early-access nature of this feature suggests Runway is continuing its aggressive development pace in the competitive AI video generation space, likely setting expectations for what may become standard functionality across the industry. As these references become more sophisticated, expect the line between traditional VFX pipelines and AI-assisted workflows to blur further, potentially reshaping production roles and specializations.