Lightricks has released an updated version of its LTXV model that generates AI video clips longer than 60 seconds while streaming in real-time. The breakthrough positions the company as the first to enable live-streamed, long-form AI video creation at scale, running on standard consumer hardware with full open-source access.
60-second AI video generation just got unlocked!
LTXV is the first model to generate native long-form video, with controllability that beats every open source model.
- 8× longer than typical gen video
- 10–100× faster & cheaper
- Runs even on consumer GPUs
- Pose, depth & …
LTX Video (@LTX_Video)
12:49 PM • Jul 16, 2025
Key developments include real-time streaming capabilities, dynamic control throughout generation, and 30x faster performance than existing models. Unlike proprietary competitors, LTXV ships both its model weights and codebase freely through platforms like Hugging Face.
Beyond the Time Limit: Eight times longer than industry standards opens new creative possibilities
The jump from 8-second clips to 60+ seconds represents more than an incremental improvement. This duration enables coherent storytelling for advertisements, game cutscenes, and educational content that was previously impractical with AI video generation.
LTXV achieves this through an autoregressive streaming architecture that generates video frame by frame while maintaining narrative continuity. The model conditions each frame based on preceding outputs, ensuring natural motion flow similar to how text generation models build sentences incrementally.
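To make the idea concrete, here is a minimal, hypothetical sketch of chunk-wise autoregressive streaming: each step conditions on the tail of what has already been generated and yields frames immediately, so playback can begin before the clip is finished. The names (generate_chunk, context_window) and the random-frame stub are illustrative placeholders, not LTXV's actual API.

```python
from dataclasses import dataclass, field
from typing import Iterator, List
import numpy as np

@dataclass
class StreamState:
    """Holds the frames generated so far; the tail conditions the next chunk."""
    frames: List[np.ndarray] = field(default_factory=list)

def generate_chunk(prompt: str, context: List[np.ndarray], chunk_len: int = 24) -> List[np.ndarray]:
    """Hypothetical stand-in for the model call: produce chunk_len new frames
    conditioned on the prompt and the most recent context frames."""
    # A real model would denoise video latents here; random frames keep the sketch runnable.
    return [np.random.rand(480, 704, 3).astype(np.float32) for _ in range(chunk_len)]

def stream_video(prompt: str, total_frames: int = 24 * 60, context_window: int = 48) -> Iterator[np.ndarray]:
    """Yield frames as they are produced, so playback can start before generation ends."""
    state = StreamState()
    while len(state.frames) < total_frames:
        context = state.frames[-context_window:]   # condition on preceding output
        chunk = generate_chunk(prompt, context)    # autoregressive step
        state.frames.extend(chunk)
        yield from chunk                           # stream to the player immediately

# Usage: consume frames as they arrive instead of waiting for the full clip.
for i, frame in enumerate(stream_video("a sailboat crossing a storm at dusk")):
    if i % 240 == 0:
        print(f"streamed {i} frames so far")
    if i >= 479:   # stop the demo after ~20 seconds at 24 fps
        break
```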
"Lightricks, the company behind LTX Video (LTXV) and LTX Studio… announced today a major evolution in its AI video technology: an update to LTXV that enables the generation of clips longer than 60 seconds," according to the company's announcement.
Live Direction: Real-time control lets creators adjust videos as they generate
The updated model introduces Continuous Control LoRAs that accept input signals for pose, depth, and Canny edge outlines throughout the entire generation process. This means creators can influence motion capture, shot composition, and stylistic elements dynamically rather than setting parameters only at the beginning.
Key frame conditioning and spatiotemporal guidance enable modifications at both scene and frame levels. Creators can refine specific moments or maintain consistency across longer sequences, addressing one of the biggest challenges in AI video generation.
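The sketch below illustrates the general pattern this describes: fresh control maps are supplied at every step, and selected frames are pinned to keyframe anchors so direction can change while the clip is still being generated. All names (control_signals_for, directed_generation, the 48-frame context window) are hypothetical stand-ins, not LTXV's published interface.

```python
from typing import Dict, Iterator, List, Optional
import numpy as np

Frame = np.ndarray  # H x W x C float array

def control_signals_for(t: int) -> Dict[str, Optional[Frame]]:
    """Hypothetical per-frame control inputs (pose skeleton, depth map, Canny edges).
    In a real workflow these would come from a capture rig or a reference video."""
    return {
        "pose":  np.zeros((480, 704, 3), dtype=np.float32),
        "depth": np.zeros((480, 704, 1), dtype=np.float32),
        "canny": np.zeros((480, 704, 1), dtype=np.float32) if t % 48 == 0 else None,
    }

def generate_frame(context: List[Frame], controls: Dict[str, Optional[Frame]],
                   keyframe: Optional[Frame]) -> Frame:
    """Stand-in for the conditioned model step; random output keeps the sketch runnable."""
    return np.random.rand(480, 704, 3).astype(np.float32)

def directed_generation(total_frames: int, keyframes: Dict[int, Frame]) -> Iterator[Frame]:
    """Apply fresh control signals at every step and pin specific frames to keyframes."""
    history: List[Frame] = []
    for t in range(total_frames):
        frame = generate_frame(
            context=history[-48:],                 # temporal continuity with prior output
            controls=control_signals_for(t),       # live pose / depth / edge guidance
            keyframe=keyframes.get(t),             # scene-level anchor, if any
        )
        history.append(frame)
        yield frame

# Usage: pin the opening frame and frame 720 (the 30-second mark at 24 fps) to references.
anchors = {0: np.zeros((480, 704, 3), np.float32), 720: np.ones((480, 704, 3), np.float32)}
frames = list(directed_generation(total_frames=48, keyframes=anchors))
print(f"generated {len(frames)} directed frames")
```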
The X announcement demonstrates these capabilities with examples of extended video sequences that maintain coherence and quality throughout their duration.
Hardware Accessibility: Consumer GPU compatibility democratizes professional-quality generation
LTXV's 13-billion parameter architecture runs 30 times faster than comparable models while operating on standard consumer GPUs. This efficiency removes the high-end hardware barrier that has limited AI video generation to well-funded studios and research institutions.
The model supports both text-to-video and image-to-video inputs, accommodating diverse creative workflows. Whether starting with written prompts or animating existing images, creators can access the same advanced generation capabilities.
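For readers who want to try both input modes, a minimal usage sketch against the Hugging Face diffusers integration might look like the following. It assumes the LTXPipeline and LTXImageToVideoPipeline classes and the Lightricks/LTX-Video checkpoint from the earlier public release; the resolution, frame count, and local reference_frame.png are illustrative, and the long-form streaming update described in this article may expose different options, so check the model card before relying on these parameters.

```python
import torch
from diffusers import LTXPipeline, LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Text-to-video: generate a clip from a written prompt.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16).to("cuda")
video = pipe(
    prompt="a drone shot gliding over a foggy coastline at sunrise",
    width=704, height=480, num_frames=161, num_inference_steps=50,
).frames[0]
export_to_video(video, "text_to_video.mp4", fps=24)

# Image-to-video: animate an existing still image (reference_frame.png is a placeholder path).
i2v = LTXImageToVideoPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16).to("cuda")
image = load_image("reference_frame.png")
video = i2v(
    image=image,
    prompt="the scene comes alive with gentle camera motion",
    width=704, height=480, num_frames=161, num_inference_steps=50,
).frames[0]
export_to_video(video, "image_to_video.mp4", fps=24)
```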
Resource efficiency extends beyond hardware requirements. The real-time streaming approach reduces the computational overhead typically associated with long-form video generation, making iterative refinement practical for individual creators.
Open Source Strategy: Free access accelerates innovation across the industry
Lightricks distinguishes itself by releasing both model weights and codebase as open source. This contrasts sharply with proprietary solutions from OpenAI (Sora), Runway (Gen-4), and Pika (Pika AI 2.1), which restrict access and require high-end GPU resources.
The open approach enables researchers and developers to experiment, customize, and fine-tune LTXV for specific applications. Academic institutions can build upon the architecture, while commercial developers can integrate the technology into their own products and services.
This accessibility strategy also powers Lightricks' own LTX Studio platform, demonstrating how open-source foundations can support both community innovation and commercial applications.
Industry Impact: Professional workflows shift toward AI-assisted production
The combination of extended duration, real-time control, and consumer accessibility positions LTXV to influence multiple sectors of media production. Advertising agencies can prototype and deploy finished content with minimal crew requirements. Game developers gain access to dynamic cutscene creation and automated character animation.
Educational content creators can generate tailored teaching segments and explainer videos at scale. The technology's flexibility supports both rapid prototyping and finished production work, potentially reshaping how teams approach video content creation.
However, the advancement also raises considerations around content authenticity and quality assurance. As AI video generation becomes more accessible and sophisticated, professional standards and editorial oversight remain essential for maintaining audience trust.
The Final Cut: Open access to long-form AI video generation accelerates creative experimentation
LTXV represents a significant shift in AI video capabilities, removing traditional constraints on duration, control, and cost. The open-source approach ensures broad access to these advances, potentially accelerating innovation across research and commercial applications.
For content creators, the technology offers immediate practical benefits: longer storytelling formats, real-time creative control, and accessibility on standard hardware. As these capabilities mature, expect to see new hybrid workflows that blend AI generation with traditional production techniques.
The competitive landscape will likely respond to LTXV's open strategy and technical capabilities. Whether through increased transparency, improved performance, or reduced costs, the pressure to match these advances may drive broader innovation in AI video generation.