At NAB, Imaginario AI launched StoryLab, a feature that takes a natural-language creative brief and returns a multi-video rough cut by routing the job through three specialized AI agents. The platform sits on top of multimodal indexing that breaks footage down to the shot and scene level across vision, speech, and sound, then puts that index to work on automated curation tasks.
Co-founder and CEO Jose Puga positioned Imaginario in the editing tool stack: not raw-interview cleanup, but curation of finished content into highlights, summaries, and compilations.
From System of Record to System of Action. Puga framed the product shift in those terms. Imaginario started by improving video search accuracy, indexing content across visual shots, dialogue, music, sound effects, and ambient sound. According to Puga, the indexing combines those modalities into a unified understanding down to the scene level. With that foundation in place, the company is now layering curation features on top of the index, including highlights, clips, rough cuts, and insights.
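A scene-level record in that kind of unified index might look something like the sketch below. The field names and structure are illustrative assumptions, not Imaginario's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one scene-level index entry combining the
# modalities described above (visual shots, dialogue, music, sound
# effects, ambient sound). Field names are invented for illustration.
@dataclass
class SceneIndexEntry:
    scene_id: str
    start_sec: float
    end_sec: float
    visual_shots: list[str] = field(default_factory=list)  # e.g. "wide", "close-up"
    dialogue: str = ""                                     # transcript for the scene
    music_tags: list[str] = field(default_factory=list)    # e.g. "brass swell"
    sfx_tags: list[str] = field(default_factory=list)      # e.g. "stone crack"
    ambient_tags: list[str] = field(default_factory=list)  # e.g. "forest wind"

entry = SceneIndexEntry(
    scene_id="s042", start_sec=318.0, end_sec=351.5,
    visual_shots=["wide", "tracking"],
    dialogue="You shall not pass!",
    music_tags=["brass swell"], sfx_tags=["stone crack"],
)
```

Queries against an index like this can then match a brief against any modality, which is what makes the curation layer possible.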
The platform stays storage-agnostic. Imaginario supports Google Drive, Dropbox, Box, Frame.io, Backblaze, Wasabi, and AWS S3, with more integrations planned. Puga said the goal is to avoid forcing teams to subscribe to another MAM system or re-upload assets.
The Three Agents Behind StoryLab. A user types a creative brief in natural language, selects source assets, sets a duration, and picks moods or shot types. That activates a chain of three agents:
Archive researcher. Searches the indexed footage for relevant visual shots, dialogue, and sound design, building a pool of candidate material against the brief.
Copywriter. Takes the candidate pool and writes a story arc with an introduction, main actions, and resolution, organizing scenes into a coherent narrative.
Assistant editor. Adjusts the start and end points of each selected scene, functioning as what Puga called a junior editor handling timing optimization.
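The chain above can be pictured as three functions composed in sequence. This is a minimal sketch under invented assumptions (plain dicts, keyword matching, fixed handle trims), not Imaginario's implementation:

```python
# Toy version of the three-agent chain: researcher -> copywriter ->
# assistant editor. All logic is placeholder for illustration.

def archive_researcher(brief: str, index: list[dict]) -> list[dict]:
    """Pool candidate scenes whose dialogue or tags match the brief."""
    terms = set(brief.lower().split())
    return [s for s in index
            if terms & set(s["dialogue"].lower().split()) or terms & s["tags"]]

def copywriter(candidates: list[dict]) -> dict:
    """Arrange candidates into an intro / main / resolution arc."""
    ordered = sorted(candidates, key=lambda s: s["start"])
    return {"intro": ordered[:1], "main": ordered[1:-1], "resolution": ordered[-1:]}

def assistant_editor(arc: dict, handle_sec: float = 0.5) -> list[dict]:
    """Tighten each scene's in/out points by a small handle."""
    cut = []
    for section in ("intro", "main", "resolution"):
        for s in arc[section]:
            cut.append({**s, "start": s["start"] + handle_sec,
                        "end": s["end"] - handle_sec})
    return cut

index = [
    {"start": 10.0, "end": 20.0, "dialogue": "the ring must be destroyed", "tags": {"dialogue"}},
    {"start": 40.0, "end": 55.0, "dialogue": "", "tags": {"battle", "wide"}},
    {"start": 90.0, "end": 99.0, "dialogue": "it is done", "tags": {"resolution"}},
]
rough_cut = assistant_editor(copywriter(archive_researcher("ring battle done", index)))
```

The useful property of the chain shape is that each stage narrows the problem: search, then structure, then timing.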
In a demo using two Lord of the Rings films, StoryLab processed a 10-minute summary brief in 42 seconds. The output preview lets editors trim handles, reorder scenes, or swap selections before exporting.
Editing Suite and Distribution Integrations. StoryLab is targeted at finished content rather than raw interview footage, with summaries, highlights, and compilations in the 2-to-10-minute range as the current sweet spot. A separate Single Clip feature handles shorter cuts from finished pieces, automatically tracking speakers and adding subtitles for direct export to YouTube Shorts, TikTok, or Instagram Reels.
The platform currently integrates with Premiere and Resolve, with Avid support planned for Q3 2026. Puga said CapCut and Canva integrations are also on the roadmap as Imaginario expands beyond post-production into content marketing teams.
On-Prem and Enterprise Controls. Imaginario runs as a cloud app on AWS infrastructure (S3 storage, EC2 GPU and CPU instances), but Puga said an on-prem version is targeted for the end of Q3 2026. The on-prem build is aimed at clients in industries sensitive to data training, including teams that want to buy their own hardware and process content locally. The intelligence layer may stay hybrid, with LLM calls to providers such as Anthropic (Claude) or OpenAI handling reasoning while indexing runs on-site.
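The hybrid split described there could be sketched as a simple task router; the task names and destinations below are invented for illustration, not Imaginario's API:

```python
# Illustrative routing for a hybrid deployment: indexing stays on
# local hardware while reasoning calls go to a hosted LLM provider.

LOCAL_TASKS = {"index_video", "detect_shots", "extract_audio"}   # runs on-site
REMOTE_TASKS = {"write_story_arc", "summarize_brief"}            # hosted LLM

def route(task: str) -> str:
    if task in LOCAL_TASKS:
        return "on-prem GPU cluster"
    if task in REMOTE_TASKS:
        return "hosted LLM API"
    raise ValueError(f"unknown task: {task}")
```

The design point is that raw footage never has to leave the building; only derived text (briefs, candidate descriptions) crosses to the hosted model.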
On agent autonomy and MCP-style access, Puga was direct about enterprise constraints. He said open-source agent stacks are not ready for primetime in regulated industries like healthcare and financial services, citing cybersecurity assessments and the need for deterministic behavior. Imaginario's approach is to treat MCP as another context channel feeding its existing agent logic rather than handing control to a general-purpose orchestrator.
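The "MCP as another context channel" idea might look roughly like this: external payloads are merged into the context the in-house agents already consume, while any tool-invocation requests are dropped so the chain stays deterministic. Structures here are hypothetical, not Imaginario's implementation:

```python
# Sketch: accept MCP data as context, refuse to hand over control.

def build_agent_context(brief: str, mcp_payloads: list[dict]) -> dict:
    context = {"brief": brief, "external": []}
    for payload in mcp_payloads:
        # Ingest data payloads only; ignore tool-call requests so an
        # outside orchestrator never drives the agent chain directly.
        if payload.get("type") == "data":
            context["external"].append(payload["content"])
    return context

ctx = build_agent_context(
    "2-minute highlight reel",
    [{"type": "data", "content": "brand guide: no fast cuts"},
     {"type": "tool_call", "content": "export_timeline()"}],
)
```

In this framing, MCP enriches what the agents know rather than expanding what they are allowed to do.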
Vibe Coding the Roadmap. Puga, who also serves as Imaginario's chief product officer, said he prototypes weekly using the Imaginario API plus open-source models to test new verticals before committing engineering time. He cited a hypothetical CCTV use case (license plate or vehicle recognition) as the type of feature he can build a working demo for in days, then validate with prospects before greenlighting production work. According to Puga, the approach has saved the team months of misallocated engineering time.
Further out, the StoryLab roadmap adds a general researcher agent that ingests production notes, scripts, brand guidelines, and franchise context before the archive researcher runs. Puga also flagged plans for analytics-driven clipping, where agents pull YouTube performance data and trend signals from a target demographic to inform what gets cut and how. The 9:16 reformatting, caption templates, transitions, and music layers already exist elsewhere in the product and are slated to roll into StoryLab once the rough cut layer is stable.
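The analytics-driven clipping idea reduces to a ranking problem: score candidate scenes against signals about what performs for a target demographic. The scoring below is a guess at the general shape, not a described algorithm, and every name is invented:

```python
# Hypothetical sketch: rank candidate clips by overlap with topics
# that trend for the target audience (weights are illustrative).

def score_clip(clip_tags: set[str], trending: dict[str, float]) -> float:
    return sum(trending.get(tag, 0.0) for tag in clip_tags)

trending = {"behind-the-scenes": 0.9, "bloopers": 0.7, "interview": 0.2}
clips = [
    {"id": "c1", "tags": {"interview"}},
    {"id": "c2", "tags": {"bloopers", "behind-the-scenes"}},
]
ranked = sorted(clips, key=lambda c: score_clip(c["tags"], trending), reverse=True)
```

In a production system the trend weights would come from the YouTube performance data Puga mentioned rather than a hand-written table.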


