CuttingRoom's new Shortcut feature connects video editing to AI agents through a text prompt interface, letting editors build timelines, add captions, translate content, and run QC checks without leaving the browser-based NLE.

Unveiled at NAB 2026, Shortcut integrates with editors' own AI licenses (Anthropic, OpenAI, or any LLM) and external MCP servers, giving AI agents access to media asset managers, transcription services, and the full editing timeline. The system keeps human editors in control while automating repetitive tasks that slow down post-production.

How Shortcut Works

Editors type natural language prompts into the Shortcut interface, and the system routes them through connected AI services to execute editing tasks. In a live demo at NAB, CuttingRoom co-founder Glenn Pedersen showed a prompt that searched a media asset manager (Iconic) for a specific interview, identified the two best quotes from the transcript, built a timeline with those clips, added intro/outro segments with audio dissolves, and generated captions, all from a single instruction.

The key architectural decision: CuttingRoom does not process any AI data itself. Editors bring their own LLM licenses, and the system connects to external MCP servers for specialized tasks like transcription, metadata search, and translation. According to Pedersen, "We will never add an AI service that you don't know about. Everything's opt in so that you have complete control over where your content is used."

Reusable team shortcuts. Once an editor finds a prompt that works, they can save it as a named shortcut and share it across their team. This turns ad-hoc AI workflows into standardized processes that any team member can execute.

MCP hub architecture. CuttingRoom serves as a hub connecting multiple AI services through MCP (Model Context Protocol) servers. The unified interface means the LLM has a broader view of available data and tools, pulling from media asset managers, transcription services, and content analysis platforms depending on the task.
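MCP itself is built on JSON-RPC 2.0, so the hub's routing can be pictured as serializing `tools/call` messages for whichever connected server handles the task. A minimal sketch, assuming a hypothetical `search_media` tool and argument schema (not CuttingRoom's actual API):

```python
import json

def mcp_request(request_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 message, the wire format MCP uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# The LLM has chosen a tool on a connected MCP server (e.g. a media
# asset manager); the hub forwards a tools/call message to that server.
msg = mcp_request(1, "tools/call", {
    "name": "search_media",                   # hypothetical tool name
    "arguments": {"query": "NAB interview"},  # hypothetical schema
})
```

Because every service speaks the same protocol, adding a new transcription or metadata tool means connecting another server rather than writing a bespoke integration.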

What Editors Can Do With It

  • Build timelines from prompts. Search transcripts, find specific quotes, assemble clips with transitions and captions from a single text instruction.

  • Automate audio cleanup. For podcast and interview content with multiple microphones, Shortcut can automatically mute one speaker's track when the other is talking, handling the tedious task of cleaning up coughs, chewing, and background noise.

  • Translate captions. Change caption languages through a prompt. The LLM handles translation directly, or Shortcut can route the request to a specialized translation service.

  • Run QC checks. Ask the AI to scan a timeline for swear words, black frames, duration compliance, or other quality control criteria.

  • Connect from external agents. CuttingRoom has built its own MCP server, so editors can access the full editing workflow from external tools like Claude Desktop. According to Pedersen, "The AI can do everything I can do," enabling workflows where field recordings get rough-cut automatically before the editor even sits down.
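On the server side, exposing editing operations over MCP amounts to registering tools and dispatching incoming `tools/call` requests to them. A simplified, hypothetical sketch (tool names like `build_rough_cut` are illustrative, not CuttingRoom's published tool list):

```python
import json

# Hypothetical registry of editing operations exposed as MCP tools.
TOOLS = {
    "build_rough_cut": lambda a: f"rough cut assembled from {a['source']}",
    "add_captions": lambda a: f"captions added ({a.get('language', 'en')})",
}

def handle_tools_call(request_json: str) -> str:
    """Dispatch a JSON-RPC 'tools/call' request to the named tool."""
    req = json.loads(request_json)
    text = TOOLS[req["params"]["name"]](req["params"]["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    })
```

An external agent such as Claude Desktop would send requests of exactly this shape, which is what makes a workflow like auto-rough-cutting field recordings possible before the editor opens the project.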

Model Agnostic and Privacy First

Shortcut supports any LLM, including offline models that run locally in the browser. No content passes through CuttingRoom's servers for AI processing. The model-agnostic approach means teams can use whatever AI service fits their security requirements and budget, from cloud-based APIs to fully local inference.
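In practice, model agnosticism often comes down to pointing the client at a configurable endpoint, since many local runtimes expose an OpenAI-compatible interface. A sketch of what such a selection might look like; the URLs and policy logic here are illustrative assumptions, not CuttingRoom's settings:

```python
# Illustrative endpoint selection: a cloud API vs. a local runtime
# exposing an OpenAI-compatible interface. URLs are example values.
ENDPOINTS = {
    "cloud": "https://api.openai.com/v1",
    "local": "http://localhost:11434/v1",  # e.g. a local inference server
}

def resolve_endpoint(policy: str) -> str:
    """Pick an LLM endpoint based on a team's security policy."""
    return ENDPOINTS["local"] if policy == "no-cloud" else ENDPOINTS["cloud"]
```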

Availability

Shortcut launched at NAB 2026 and will roll out to all CuttingRoom users following the show. The platform remains browser-based and cloud-native, accessible from any location without local software installation.
