
At the ISC High Performance 2025 event, KAYTUS launched a new version of its MotusAI platform. The upgraded system helps businesses run large AI models faster and more reliably. Known for its AI infrastructure and cooling systems, KAYTUS now aims to solve long-standing issues in large model deployment. These include slow setup times, unstable services, and poor resource use. MotusAI’s new features promise better scheduling, monitoring, and support for popular tools. “We built MotusAI to simplify how companies adopt and operate large models,” said a company spokesperson. The platform targets sectors like finance, energy, education, and manufacturing.
Smarter Tools for Faster, Smoother AI Deployment
MotusAI is a DevOps platform designed for managing and running large language models. The upgraded version improves performance and speeds up deployment for enterprise users. It now supports dynamic resource scheduling, allowing better use of computing power during model operations.
The AI platform includes built-in monitoring tools that track everything from hardware to software services. These tools diagnose problems automatically and recover from faults quickly. The platform also supports scaling based on real-time demand, which helps services stay stable during peak use.
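Demand-based scaling of this kind is typically a target-tracking rule: compare observed utilization to a target and resize the replica count accordingly. The sketch below illustrates that general pattern in plain Python; the policy names and thresholds are hypothetical and are not MotusAI's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class ScalePolicy:
    """Hypothetical demand-based scaling policy (illustrative only)."""
    target_util: float = 0.6   # desired average per-replica utilization
    min_replicas: int = 1
    max_replicas: int = 8

def desired_replicas(policy: ScalePolicy, current: int, utilization: float) -> int:
    """Target-tracking rule: scale the replica count so that average
    utilization drifts back toward the target, clamped to policy bounds."""
    raw = math.ceil(current * utilization / policy.target_util)
    return max(policy.min_replicas, min(policy.max_replicas, raw))
```

For example, two replicas running at 90% utilization against a 60% target would be scaled out to three, while the clamp keeps a traffic spike from requesting more replicas than the policy allows.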
To keep pace with the fast-moving open-source AI ecosystem, MotusAI integrates inference engines such as SGLang and vLLM. For data tasks, it offers tools like Label Studio and OpenRefine. This gives developers a complete toolkit for every step, including data labeling, training, testing, and deployment. “We wanted to give users control over the whole AI model lifecycle,” the spokesperson said. “With MotusAI, they get that in one place.”
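Supporting interchangeable engines like SGLang and vLLM usually comes down to an adapter layer: the platform defines one interface and registers each engine behind it. The sketch below shows that pattern with a stand-in backend; the class and registry names are illustrative assumptions, not MotusAI's real API, and the stub stands in for a real engine client.

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict

class InferenceBackend(ABC):
    """Hypothetical adapter interface a platform might define so that
    inference engines can be swapped without changing calling code."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class StubBackend(InferenceBackend):
    """Stand-in for a real engine client (e.g. vLLM or SGLang)."""
    def __init__(self, name: str):
        self.name = name
    def generate(self, prompt: str) -> str:
        # A real adapter would call the engine's API here.
        return f"[{self.name}] {prompt}"

# Registry mapping engine names to backend factories.
BACKENDS: Dict[str, Callable[[], InferenceBackend]] = {
    "vllm": lambda: StubBackend("vllm"),
    "sglang": lambda: StubBackend("sglang"),
}

def get_backend(name: str) -> InferenceBackend:
    """Look up and construct the requested engine adapter."""
    return BACKENDS[name]()
```

With this shape, adding a newly released engine means registering one more adapter rather than touching the rest of the pipeline, which is how a platform can track a fast-changing tool landscape.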
MotusAI Shows Real-World Impact Amid Early Adoption Hurdles
Companies using large AI models often face long deployment times and low hardware utilization. MotusAI directly tackles these issues. Its upgraded scheduler co-locates training and inference workloads on the same node through unified GPU allocation, avoiding the waste of dedicating separate machines to each.
The scheduler supports fine-grained GPU partitioning and NVIDIA’s MIG (Multi-Instance GPU), which lets teams run multiple small tasks at once. This flexibility helps startups and smaller firms get more from limited resources. In early tests, MotusAI’s new scheduler delivered five times more task throughput and cut latency by 80%. These improvements help reduce delays when launching new AI features. “We saw huge gains in speed and resource use,” said one early adopter from the automotive sector.
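With MIG, an A100-class GPU is carved into fixed-size instances (profiles such as 1g.5gb or 3g.20gb, drawing on up to seven compute slices per GPU), and a scheduler's job is to pack workload requests into those slices. The sketch below shows a simple first-fit packing over MIG-style profiles; the profile table is NVIDIA's published A100 naming, but the packing logic is an illustrative assumption, since MotusAI's actual scheduler is not public.

```python
from typing import Dict, List

# A100-style MIG profiles mapped to compute slices (7 slices per GPU).
MIG_SLICES = {"1g.5gb": 1, "2g.10gb": 2, "3g.20gb": 3, "7g.40gb": 7}

def pack_tasks(tasks: List[str], gpus: int, capacity: int = 7) -> Dict[int, List[str]]:
    """First-fit packing of MIG-profile requests onto whole GPUs.
    Illustrative sketch only, not MotusAI's real algorithm."""
    free = [capacity] * gpus           # remaining slices per GPU
    placement: Dict[int, List[str]] = {i: [] for i in range(gpus)}
    for profile in tasks:
        need = MIG_SLICES[profile]
        for i in range(gpus):
            if free[i] >= need:        # first GPU with enough free slices
                free[i] -= need
                placement[i].append(profile)
                break
        else:
            raise RuntimeError(f"no GPU can fit {profile}")
    return placement
```

Packing several small profiles onto one GPU this way is what lets multiple independent tasks share hardware that would otherwise sit partly idle, which is the utilization gain the article describes for smaller teams.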
Still, experts note that managing complex tools on a single platform can require additional training. Some teams may need time to adjust. However, KAYTUS says that built-in automation and easy-to-use dashboards aim to lower that barrier. Looking ahead, the company plans to expand MotusAI’s support for more model types and cloud environments. It’s also exploring new features for edge AI deployments in remote or mobile setups.
MotusAI’s Upgrade Signals a Broader Shift in AI Operations
The newest version of MotusAI reflects a growing need for smarter AI infrastructure. As more industries adopt large models, demand increases for platforms that simplify the complex aspects. MotusAI meets this demand with better performance, wider tool support, and flexible scheduling. Its improvements suggest a shift toward unified AI operations, where developers, data scientists, and engineers work on the same system. With this launch, KAYTUS positions itself as a key player in the next phase of enterprise AI adoption, one focused on speed, scale, and real-world impact.