
In a strategic move to assert dominance in China’s rapidly evolving artificial intelligence (AI) market, Baidu has launched two sophisticated new AI models that deliver high-end performance at significantly lower costs, over 60% less than DeepSeek’s competing models. The announcement was made on April 25 at Baidu’s annual developer conference, amid fierce competition among local tech titans vying for control of the AI sector.
Cutting Costs, Not Capability
According to Reuters, Baidu has launched a powerful new AI computing cluster built from 30,000 of its third-generation Kunlun P800 chips. The cluster is now operational and ready for training large models comparable to DeepSeek’s. Announced by co-founder and CEO Robin Li at the company’s developer conference, the upgrade supports Baidu’s push to improve AI performance and accessibility. The company also unveiled two advanced models, Ernie 4.5 Turbo and Ernie X1 Turbo, showcasing its integrated approach to high-efficiency hardware and software development.
Ernie X1 Turbo, which is optimized for reasoning-heavy tasks such as search, scripting, and question answering, is priced at $0.28 per million input tokens and $1.10 per million output tokens. These rates significantly undercut comparable offerings such as DeepSeek’s R1 model, which is roughly twice as expensive, as AI News reports.
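To put per-token pricing in concrete terms, the short Python sketch below estimates a monthly bill from the Ernie X1 Turbo rates quoted above; the workload sizes in the example are hypothetical and chosen purely for illustration.

```python
# Rough cost estimate for a hypothetical workload on Ernie X1 Turbo.
# Prices are the USD-per-million-token figures quoted above;
# the token volumes below are invented for illustration only.

PRICE_PER_M_INPUT = 0.28   # USD per 1M input tokens (Ernie X1 Turbo)
PRICE_PER_M_OUTPUT = 1.10  # USD per 1M output tokens (Ernie X1 Turbo)

def workload_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a given token volume."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT + \
           (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# Hypothetical month: 50M input tokens and 10M output tokens.
monthly = workload_cost(50_000_000, 10_000_000)
print(f"Estimated monthly cost: ${monthly:.2f}")  # -> $25.00
```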
The multimodal Ernie 4.5 Turbo, which can understand and generate text, images, audio, and video, incorporates technologies such as FlashMask dynamic attention and a Mixture-of-Experts (MoE) architecture. Baidu says the model surpasses OpenAI’s GPT-4.5 in internal testing and is cost-effective at $0.55 per million input tokens and $2.20 per million output tokens, half the cost of GPT-4.5.
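For readers unfamiliar with the term, a Mixture-of-Experts layer routes each token to a small subset of specialist sub-networks rather than running every parameter on every token, which is how such models keep inference costs down. The minimal PyTorch sketch below shows the generic top-k routing idea only; Baidu has not published the details of Ernie 4.5 Turbo’s implementation, so none of this code reflects its actual design.

```python
# Generic top-k Mixture-of-Experts layer (illustrative only; not Baidu's design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=4, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                              # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)    # routing probabilities
        topk_w, topk_idx = weights.topk(self.k, dim=-1)  # keep only k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e          # tokens routed to expert e
                if mask.any():
                    out[mask] += topk_w[mask, slot, None] * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```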
Baidu has significantly enhanced its AI offerings, positioning the Ernie 4.5 Turbo and X1 Turbo models as cost-effective alternatives to rival large language models (LLMs). According to Li, Ernie 4.5 Turbo is approximately 40% cheaper than DeepSeek’s V3 model, while X1 Turbo costs only 25% as much as DeepSeek’s R1 reasoning model. Moreover, Ernie 4.5 Turbo outperformed OpenAI’s GPT-4o in a set of benchmark tests measuring text processing and multimodal capability, averaging a score of 77.68 versus GPT-4o’s 72.76.
At Baidu’s developer conference in Wuhan, Li emphasized that reducing the cost of AI development is essential to driving innovation. “The essence of innovation is the lowering of cost,” Li stated, adding that cheaper foundation models free developers from worrying about model prices so that they can concentrate on building compelling, substantive applications instead.
Hardware Power Behind the Models
Baidu has significantly expanded its AI infrastructure with the rollout of a powerful cluster of 30,000 Kunlun P800 processors. The design allows multiple large-scale models with hundreds of billions of parameters to be trained concurrently, while also letting thousands of users fine-tune smaller models at the same time. The P800 cluster not only boosts computational performance but also lowers the operational costs associated with advanced AI development.
According to Li, this chip infrastructure provides the computational foundation needed to run models of DeepSeek’s size and complexity. He said these advancements allow Baidu to stay ahead of the competition by delivering strong performance at a fraction of the typical cost.
Charlie Dai, vice-president and principal analyst at Forrester, said that Baidu’s advances in multimodal LLMs and multi-agent applications will drive AI adoption in China and lower the barrier to entry for developers.