Axle AI rolled out its fourth-generation tagging engine at NAB, expanding semantic search across video, transcripts, and human-applied tags in a single index. The MAM and DAM platform also debuted a native Avid Media Composer panel and now lets customers extend the product themselves using Claude. AI processing runs on-premises, a deliberate counter to the cloud-default tooling that dominates elsewhere in the category.

Axle AI CEO Sam Bogoch also discussed the company's Portfolio DAM acquisition from Extensis, a stills-focused product line with customers including universities, libraries, and Lockheed Martin. The deal effectively doubles Axle AI's business and gives Portfolio users access to the same on-prem AI and workflow automation stack that powers the video product.

Fourth-Generation Axle Tags. The current version of Axle Tags, the company's tagging and search engine, expands semantic search from the video content itself to a combined index of video, transcripts, and human-applied tags. According to Bogoch, the engine "searches the whole enchilada" so a query hits captions, speech-to-text, and metadata in one pass.

  • On-prem by default. Bogoch estimates 90% of AI tooling discussed in the industry runs on cloud infrastructure, which he said creates privacy, security, and cost issues that most of Axle AI's customers will not accept. He pointed to Amazon Rekognition's terms of service, which permit retaining uploaded content for training, as the kind of fine print that "a lot of studios would freak out" about.

  • Petabyte-scale archives. Bogoch said 4K and 8K acquisition is pushing more customers into petabyte territory, where cloud egress and storage costs make on-prem hardware the more practical option.

  • Hybrid storage reality. Post-COVID, customers have returned to a balanced model: storage and editors co-located in the office, with remote freelancers accessing assets through hybrid setups. We previously covered Axle AI's expansion to Mac mini and Mac Studio for on-prem processing.

Avid Media Composer Panel. New at the show is a Media Composer panel that lets editors search the Axle AI catalog and drag clips directly into the timeline, matching the integrations Axle AI already ships for Premiere Pro and DaVinci Resolve. Axle AI previously supported Avid OP-Atom MXFs at the file level, but the panel makes the workflow native inside Media Composer. We covered the Media Composer panel launch in greater depth.

According to Bogoch, the integration reflects what Avid customers are asking for: open systems that talk to Avid without locking them into a closed pipeline.

Agentic Workflows and Claude-Built Customization. Bogoch described agentic AI for video as being at "1.0" today, with most teams brainstorming rather than running production work through agents. Axle AI's existing Connector workflow tool can talk to agentic engines, and Bogoch expects the space to move quickly over the next six months.

The bigger structural shift, he said, is how customers extend the product themselves using Claude. Because Axle AI's stack is modular and exposes REST APIs, users can build the last 10% of functionality they need without waiting on the vendor roadmap. Internal teams are working the same way.

  • ONNX model support. Customers can train and plug in their own machine learning models for category-specific recognition (Bogoch cited airline logos, flora and fauna, and undersea content as examples) when the default AI models don't cover their use cases.

  • Customer-built integrations. A Portfolio customer built a WordPress publishing interface for pushing subsets of stills directly from the DAM, demoed on a recent Axle AI webinar. According to Bogoch, "a few days of vibe coding" can now replace a quarter's worth of waiting on the vendor roadmap.

  • Coders as product managers. Bogoch said Axle AI's engineers are shifting toward managing "Claude minions" rather than writing every line of code themselves, compressing what used to be a year of features into roughly two weeks.
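To make the ONNX point concrete, here is a minimal sketch of the plug-in idea: customer-trained, category-specific models registered alongside the defaults and run together at tag time. The names (`register`, `flora_fauna`, the stub taggers) are hypothetical illustrations, not Axle AI's actual API; in practice a specialist entry would wrap an ONNX session (e.g. `onnxruntime.InferenceSession`) rather than a stub.

```python
# Sketch of a tagger plug-in registry; names are hypothetical,
# not Axle AI's actual API.
from typing import Callable, Dict, List

Tagger = Callable[[bytes], List[str]]  # frame bytes -> list of tags
REGISTRY: Dict[str, Tagger] = {}

def register(name: str, tagger: Tagger) -> None:
    """Add a default or customer-supplied model to the registry."""
    REGISTRY[name] = tagger

# Built-in general-purpose tagger (stubbed).
register("general", lambda frame: ["person", "outdoor"])

# Customer-trained specialist (stubbed; would wrap an ONNX session).
register("flora_fauna", lambda frame: ["kelp", "sea otter"])

def tag(frame: bytes, models: List[str]) -> List[str]:
    """Run the selected models over one frame and merge their tags."""
    tags: List[str] = []
    for name in models:
        tags.extend(REGISTRY[name](frame))
    return tags

print(tag(b"<frame>", ["general", "flora_fauna"]))
# -> ['person', 'outdoor', 'kelp', 'sea otter']
```

The design point is that the specialist model is additive: it extends the tag set for a niche category without replacing the general models.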

Portfolio DAM and Workflow Automation. Portfolio handles stills at scale (millions of files), with on-prem deployment positioned as a way to control storage costs as file sizes climb. Now part of Axle AI, Portfolio users get the same Connector workflow engine that handles automation rules on the video side.

According to a Portfolio representative in the interview, common automated workflows include:

  • Consent capture for biometrics. To comply with the Biometric Information Privacy Act, a Connector flow can look up a subject's email from an identified face, send a consent form, and gate downstream facial recognition processing on the response.

  • Metadata validation routing. Files arriving without complete metadata, or with AI-applied tags that need a human-in-the-loop check, can be flagged and routed to a reviewer before publishing. We covered Axle AI's Searchr portable media system for similar workflow patterns on the video side.

  • Rule-based ingest. Incoming stills can be routed to specific Portfolio locations based on metadata values, keeping the interface simple while the automation runs underneath.
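The rule-based ingest described above amounts to matching metadata against an ordered list of rules, each mapping a predicate to a destination. A minimal sketch follows; the field names and folder paths are hypothetical illustrations, not Portfolio's actual schema.

```python
# Sketch of rule-based ingest routing: first matching rule wins,
# with a catch-all default. Fields and paths are hypothetical.
from typing import Callable, Dict, List, Tuple

Rule = Tuple[Callable[[Dict[str, str]], bool], str]

RULES: List[Rule] = [
    (lambda m: m.get("department") == "marketing", "/portfolio/marketing"),
    (lambda m: m.get("camera", "").startswith("DJI"), "/portfolio/drone"),
]
DEFAULT = "/portfolio/inbox"

def route(metadata: Dict[str, str]) -> str:
    """Return the destination folder for an incoming still."""
    for predicate, destination in RULES:
        if predicate(metadata):
            return destination
    return DEFAULT

print(route({"department": "marketing"}))   # -> /portfolio/marketing
print(route({"camera": "DJI Mavic 3"}))     # -> /portfolio/drone
print(route({}))                            # -> /portfolio/inbox
```

The catch-all default is what keeps the interface simple for end users: files that match no rule still land somewhere predictable for later triage.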

Facial recognition and object recognition models from the Axle AI side now run against stills inside Portfolio (transcription excluded, since stills don't have audio), supplementing human-applied metadata.
