
Mumbai-based AI firm Fractal has launched Fathom-R1-14B, an open-source reasoning model focused on mathematical and logical problem-solving. The launch marks a significant step for India’s AI ambitions under the IndiaAI Mission and raises the country’s profile in global AI innovation. Fathom reflects a growing shift toward transparent, accessible AI research and sets a precedent for high-performance, open-weight models built with national-scale collaboration in mind.
National Vision Meets Global Standards Through Fathom-R1-14B
According to reports, Fractal, a leading Mumbai-based AI startup, has launched Fathom-R1-14B, a 14-billion-parameter open-source reasoning model developed under the IndiaAI Mission. The release marks a major milestone toward building India’s first Large Reasoning Model (LRM) and strengthening the country’s sovereign AI capability.
Fathom-R1-14B, built on the DeepSeek-R1-Distill-Qwen-14B architecture, handles complex mathematical and logical tasks with a 16,000-token context window. This extended context lets the model work through long problem statements and multi-step reasoning chains in a single prompt, improving reasoning depth and response accuracy. Fractal also optimized the model for efficient deployment, achieving a remarkably low post-training cost of just $499, which widens accessibility and use.
Fractal’s AI research team refined the model using supervised fine-tuning (SFT), curriculum learning, and model merging to boost its reasoning capabilities. They also developed Fathom-R1-14B-RS, a companion variant that combines reinforcement learning with SFT for improved performance, with only $967 in training costs.
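To give a sense of what “model merging” means in practice, the sketch below shows the simplest form of the idea: averaging the weights of several fine-tuned checkpoints that share one architecture. The checkpoint names and output path are hypothetical, and Fractal’s actual recipe is not spelled out in the announcement, so treat this only as an illustration of the general technique.

```python
# Minimal sketch of weight-space model merging: average the parameters of several
# fine-tuned checkpoints of the same architecture. Checkpoint paths are hypothetical;
# Fractal's actual merging recipe may differ (e.g., weighted or task-vector merging).
import torch
from transformers import AutoModelForCausalLM

checkpoint_paths = ["ckpt_math_sft", "ckpt_curriculum_sft"]  # hypothetical local checkpoints
models = [
    AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)
    for path in checkpoint_paths
]

merged = models[0]
merged_state = merged.state_dict()
other_states = [m.state_dict() for m in models[1:]]

with torch.no_grad():
    for name, param in merged_state.items():
        # Stack the same tensor from every checkpoint and take the element-wise mean.
        stacked = torch.stack([param.float()] + [s[name].float() for s in other_states])
        merged_state[name] = stacked.mean(dim=0).to(param.dtype)

merged.load_state_dict(merged_state)
merged.save_pretrained("merged-reasoning-model")  # hypothetical output directory
```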
Training followed a layered approach, with each stage building on the previous one to deepen the model’s reasoning and sharpen response accuracy. Explaining the motivation behind the project, Srikanth Velamakanni, CEO of Fractal, said in a LinkedIn post:
Today’s large pre-trained AI models are great at summarization, information retrieval, and content generation… That’s why we, at Fractal, proposed building India’s first Large Reasoning Model (LRM), a next-generation AI system built to work with open-source LLMs, trained on Indian data, and designed to tackle real-world complexity.
Fathom-R1-14B is openly released under the MIT license, allowing free academic and commercial use. All model files, training data, and recipes are available on Hugging Face and GitHub, promoting transparency, collaboration, and accessibility, especially for education, research, and low-resource settings.
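For readers who want to try the model, here is a minimal sketch of loading it for inference with the Hugging Face transformers library. The repository id and generation settings below are assumptions; check the official model card on Hugging Face for the exact repo name, prompt format, and recommended parameters.

```python
# Minimal sketch: running Fathom-R1-14B locally via Hugging Face transformers.
# The repo id "FractalAIResearch/Fathom-R1-14B" is assumed, not confirmed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FractalAIResearch/Fathom-R1-14B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a 14B model on a large GPU
    device_map="auto",
)

prompt = "Find all real x such that x^2 - 5x + 6 = 0. Show your reasoning."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```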
Technical Excellence and Performance Metrics
Fathom-R1-14B sets a new benchmark for open-source reasoning models, excelling particularly at advanced mathematical problem-solving. With 14 billion parameters and a 16,000-token context window, the model delivers high-performance reasoning while remaining cost-effective and scalable. On competitive math benchmarks, it posted consistently strong results:
- 52.71% Pass@1 on AIME-25 (American Invitational Mathematics Examination)
- 35.26% Pass@1 on HMMT-25 (Harvard-MIT Mathematics Tournament)
When combined with inference-time scaling via cons@64 (consensus, i.e., majority voting over 64 sampled solutions), its accuracy increased to 76.7% on AIME-25 and 56.7% on HMMT-25, demonstrating the model’s strength on demanding reasoning tasks.
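To make these metrics concrete, the sketch below shows one common way Pass@1 and cons@k are computed from a set of sampled final answers for a single problem. It illustrates the metrics themselves under simple assumptions (exact-match answers), not Fractal’s actual evaluation harness.

```python
# Illustrative computation of Pass@1 and cons@k (majority vote over k samples)
# for one problem, assuming exact string match against a reference answer.
from collections import Counter

def pass_at_1(samples: list[str], reference: str) -> float:
    """Fraction of independent samples whose final answer matches the reference;
    averaged over many problems, this estimates Pass@1."""
    return sum(ans == reference for ans in samples) / len(samples)

def cons_at_k(samples: list[str], reference: str, k: int = 64) -> bool:
    """cons@k: take the most common final answer among the first k samples
    and check it against the reference."""
    majority_answer, _ = Counter(samples[:k]).most_common(1)[0]
    return majority_answer == reference

# Toy example: 64 sampled final answers to one AIME-style problem.
answers = ["204"] * 40 + ["210"] * 24
print(pass_at_1(answers, "204"))        # 0.625: per-sample accuracy
print(cons_at_k(answers, "204", k=64))  # True: majority voting recovers the answer
```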
Fathom-R1-14B performs close to proprietary models such as o4-mini (low), while outperforming OpenAI’s smaller reasoning models o1-mini and o3-mini. In self-consistency tests, which assess stability across repeated queries, the model delivered reliable results, a useful property for precision-critical applications and environments. These results underscore Fathom-R1-14B’s potential as a dependable reasoning model for research, enterprise, and high-accuracy deployment scenarios.
As noted earlier, Fractal has also released Fathom-R1-14B-RS, a variant that combines reinforcement learning with SFT. This version delivers comparable performance with better consistency while keeping post-training expenses under roughly $1,000, underscoring Fractal’s commitment to making high-quality AI accessible.
Conclusion
Fractal’s release of Fathom-R1-14B is a turning point for India’s AI sector. As the nation’s first large-scale open-source reasoning model, it demonstrates both technical prowess and a commitment to accessible AI. Aligned with global standards, Fathom enables researchers, entrepreneurs, and companies to build logic-driven systems. As India pursues its AI goals through the IndiaAI Mission, models like this will be critical in fostering innovation, collaboration, and digital sovereignty.