
Lightricks is rewriting the rules of AI filmmaking with what it calls the world’s first long-form AI video model, one that supports livestreaming editable in real time. The Israeli startup, well known for mobile hits like Facetune and Videoleap, has introduced LTXV, which offers in-the-moment refinement and clips of up to 60 seconds.
An autoregressive engine powers this next-generation model, letting users manipulate their video as it is being created. Co-founder and chief technology officer Yaron Inger says this shifts AI content from being “prompted” to being “directed,” making high-quality storytelling possible for creators worldwide.
Lightricks Pushes Boundaries With Real-Time Storytelling Power
LTXV turns generative video into a fluid process by letting users enter new prompts mid-generation. Unlike models that make users wait minutes for a finished clip, LTXV begins streaming immediately and produces video continuously in real time. By generating overlapping frame segments, it renders smoothly while keeping motion, characters, and plot consistent.
Viewers can now watch live-edited scenes, such as a gorilla hugging a woman while she cooks, play out without logical lapses. This puts Lightricks ahead of competitors that still rely on pre-rendering, such as Runway, Google’s Veo, or OpenAI’s Sora.
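The overlapping-segment idea can be sketched in a few lines. This is an illustrative toy, not LTXV’s actual implementation or API: each new segment is conditioned on the tail frames of the previous one, which is what keeps motion consistent across segment boundaries. The segment and overlap sizes here are made-up numbers, and each “frame” is just an integer so the loop runs on its own.

```python
# Toy sketch of autoregressive generation with overlapping segments.
# All names and sizes are hypothetical; a real model would condition on
# decoded image tensors, not integer frame indices.

OVERLAP = 8          # hypothetical number of tail frames carried forward
SEGMENT_LEN = 24     # hypothetical frames generated per segment


def generate_segment(context_frames, prompt, length=SEGMENT_LEN):
    """Stand-in for a model call: extends context_frames by `length` frames."""
    start = context_frames[-1] + 1 if context_frames else 0
    return [start + i for i in range(length)]


def stream_video(prompts):
    """Build a long clip segment by segment, one prompt per segment."""
    video = []
    for prompt in prompts:
        context = video[-OVERLAP:]      # overlap with the previous segment
        video.extend(generate_segment(context, prompt))
    return video


clip = stream_video(["a chef cooking", "a gorilla joins the scene"])
```

Because every segment starts from the previous segment’s tail, the clip has no seams at segment boundaries, which is the consistency property the article attributes to LTXV’s streaming renderer.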
Is This the Fastest AI Video Model Yet?
Thanks to LTXV’s new autoregressive core, creators can produce long-form content as easily as conversing with an AI chatbot. The model returns the first second of video almost instantly, then builds the storyline scene by scene in response to user feedback. Unlike proprietary systems, Lightricks keeps development open.
The 13-billion-parameter version and a mobile-optimized 2-billion-parameter model are now available on Hugging Face and GitHub. Running smoothly on a single Nvidia H100 GPU, or even a consumer laptop, the model lowers the cost of video creation.
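The “directing rather than prompting” workflow described above can be sketched as a generator that yields each chunk as soon as it is ready and accepts a revised prompt that takes effect on the next chunk. The function name, chunk granularity, and string “chunks” are all hypothetical stand-ins, not LTXV’s real interface:

```python
# Hypothetical sketch: stream chunks immediately, redirect mid-stream.

def direct_stream(initial_prompt, total_chunks):
    prompt = initial_prompt
    for i in range(total_chunks):
        chunk = f"chunk {i}: {prompt}"   # stand-in for one second of video
        new_prompt = yield chunk          # caller may send a new direction
        if new_prompt is not None:
            prompt = new_prompt


stream = direct_stream("a chef cooking", total_chunks=3)
first = next(stream)                                # first chunk arrives at once
second = stream.send("a gorilla hugs the chef")     # redirect mid-stream
third = next(stream)                                # new direction persists
```

The design point is that the caller never waits for the whole clip: output starts with the first chunk, and later chunks incorporate whatever direction arrived in the meantime.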
Early adopters’ social media posts show seamless, continuous stories. The model’s adaptive layering makes this possible, handling continuity much as large language models do. The company also maintains its ethical foundation.
LTXV is trained only on licensed data from Shutterstock and Getty Images, sidestepping copyright concerns. Its outputs can also be refined in LTX Studio, a commercial tool developers and artists use to edit and enhance generated content.
Lightricks Expands AI Video Model Use Cases Worldwide
Lightricks’ real-time video generation has applications well beyond social media. The company envisions video game cutscenes that change dynamically as the player progresses, and augmented reality concerts featuring AI dancers that follow the beat.
The tool can also help educational content creators. Interactive videos that adapt in real time to student responses could boost engagement, turning passive viewing into active learning.
To scale, Lightricks is negotiating licensing and revenue-sharing deals with major production companies. The startup is exploring commercial use in media and entertainment pipelines while keeping the weights open for creatives. Affordable real-time rendering means even small teams can create long-form cinematic content.
Will You Be the Next AI Storytelling Pioneer?
Lightricks’ new AI video model turns creators into directors rather than mere prompt writers. The tool is fast, flexible, and built for anyone with a story to tell. Whether you are a gamer, an educator, or a filmmaker, you can now shape the direction of visual storytelling.