
An investigation by ProPublica reveals that the Trump administration’s rushed deployment of AI to evaluate Veterans Affairs contracts has led to alarming errors. Tasked with flagging “waste,” the AI misclassified essential services, like internet access and safety equipment maintenance, as expendable. The flawed system, created under time pressure using an outdated AI model, was built on unclear instructions and ignored critical contract data. Experts warn the system lacks the context and technical nuance required for such high-stakes decisions, putting vital services for veterans at risk under the guise of efficiency.
Inside the Botched System: What Went Wrong with VA’s AI Contract Review
Software engineer Sahil Lavingia built the contract-review tool for the Department of Government Efficiency (DOGE) to determine which VA contracts were “munchable,” that is, cancelable. But the team built it on an older, general-purpose language model and fed it only the first 10,000 characters of each contract document, roughly the first few pages. Lacking critical training and oversight, the AI hallucinated figures, misreporting more than 1,000 contracts as worth $34 million each, and misunderstood the purpose of others. It also flagged services such as electronic medical record (EMR) systems and ceiling-lift maintenance as non-essential simply because they were not directly tied to “bedside care.”
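To make that failure mode concrete, here is a minimal sketch of the kind of truncate-and-classify pipeline the reporting describes. It is illustrative only: the function name, prompt wording, model choice, and use of OpenAI’s chat API are assumptions for the sake of the example, not the actual DOGE code (which ProPublica has published separately).

```python
# Illustrative sketch only -- NOT the actual DOGE/VA code.
# Shows why truncating a contract to its first 10,000 characters
# discards the pricing tables, option years, and scope details
# that usually appear deep inside the document.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MAX_CHARS = 10_000  # the cutoff ProPublica reported

PROMPT = (
    "You review federal contracts. Based on the text below, answer "
    "MUNCHABLE or NOT MUNCHABLE, where 'munchable' means cancelable "
    "because it is not tied to direct patient care."  # vague, undefined criterion
)

def classify_contract(contract_text: str, model: str = "gpt-4o-mini") -> str:
    # Everything past MAX_CHARS is silently dropped, so a 200-page
    # contract is judged on roughly its first few pages of boilerplate.
    excerpt = contract_text[:MAX_CHARS]
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": excerpt},
        ],
    )
    return response.choices[0].message.content
```

Nothing in a pipeline shaped like this checks whether the discarded remainder of the document would change the answer, which is how long, complex contracts end up judged on their opening pages alone.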
The criteria for cancellation were vague and poorly defined, especially around what counted as “direct patient care.” The AI’s judgments rested more on assumptions than on concrete information. University experts and former federal IT leaders described the system as dangerous, flawed, and incapable of accurately evaluating a contract’s value or function. Even critical audit and compliance contracts were routinely marked for termination. The result: a flood of potentially damaging recommendations, driven by an AI system with no understanding of how the VA actually operates. ProPublica’s publication of the prompts and code now sheds light on the recklessness behind this cost-cutting initiative.
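One cheap guardrail a system like this could have had, but reportedly lacked, is refusing to trust any dollar figure the model emits without cross-checking it against the authoritative contract record. The sketch below is an assumption about what such a check might look like; the JSON field names and 1% tolerance are invented for illustration.

```python
# Illustrative sketch only -- the field names and tolerance are assumptions.
# Guardrail: never accept a model-reported contract value that diverges
# from the authoritative record, which would have caught the $34M misreads.

import json

def check_reported_value(model_output: str, record_value: float,
                         tolerance: float = 0.01) -> bool:
    """Reject the model's answer if its dollar figure does not match
    the authoritative contract value within `tolerance` (1% here)."""
    data = json.loads(model_output)  # e.g. {"value": 34000000, "munchable": true}
    reported = float(data["value"])
    if record_value == 0:
        return reported == 0
    drift = abs(reported - record_value) / record_value
    return drift <= tolerance

# A hallucinated $34M figure against a real $35,000 contract fails loudly:
assert check_reported_value('{"value": 34000000}', record_value=35_000) is False
```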
Policy Overreach: How Politics Shaped the AI’s Failures
The AI script embedded the Trump administration’s political priorities, such as dismantling DEI programs, directly into its logic without defining those terms or verifying their relevance to a given contract. As a result, the model inconsistently flagged contracts related to diversity, healthcare technology, data analysis, and even veteran housing inspections. This politicized approach led the AI to favor “hard” services like direct nursing while dismissing “soft” administrative functions as waste, even when those functions supported critical VA operations.
The prompt also told the AI to classify jobs like customer support, recruiting, IT, and even content creation as “easily insourced,” despite the administration’s hiring freeze. That logic alone marked more than 500 contracts as cancelable. Experts noted this could produce more inefficiency, not less, by forcing the VA to stretch thin staff or abandon services altogether. The prompt likewise made sweeping assumptions, such as treating all consulting or modernization services as non-essential, without context, as the sketch below illustrates.
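The shape of that over-flagging is easy to reproduce. The sketch below uses an invented keyword rule and invented contract descriptions; it is not the actual prompt logic, only an assumption-laden illustration of why blanket “easily insourced” categories sweep in contracts regardless of whether insourcing is feasible.

```python
# Illustrative sketch only -- keywords and contract data are invented.
# A blanket "easily insourced" rule flags every match, with no check of
# whether a hiring freeze makes insourcing impossible in practice.

INSOURCEABLE_KEYWORDS = ("customer support", "recruiting", "IT services",
                         "content creation", "consulting", "modernization")

def flag_easily_insourced(description: str) -> bool:
    text = description.lower()
    return any(kw.lower() in text for kw in INSOURCEABLE_KEYWORDS)

contracts = [
    "IT services for the electronic medical record rollout",
    "Consulting on nursing-home safety inspections",
    "Direct bedside nursing staff augmentation",
]

flagged = [c for c in contracts if flag_easily_insourced(c)]
print(flagged)  # the first two are flagged, regardless of staffing reality
```

A rule this blunt cannot distinguish a dispensable consulting engagement from one that keeps safety inspections running, which is exactly the kind of context experts said the system was missing.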
The root problem, critics argue, lies in deploying an AI model with political bias, unclear definitions, and poor understanding of federal systems. While the VA says its staff reviewed the AI’s output, the fact remains: the AI lacked the accuracy and judgment needed for complex procurement decisions. Analysts fear this sets a dangerous precedent for using automated tools in public service administration without transparency, oversight, or ethical safeguards.
Warnings Mount: Experts Urge Accountability and Oversight
Experts say the VA’s flawed AI deployment illustrates a broader danger of over-relying on automated systems for public policy. Misclassified contracts, vague prompts, and hallucinated figures show that even well-intentioned AI tools can do harm without proper oversight. Critics emphasize the need for ethical AI governance, transparency, and domain-specific training before allowing such systems to impact real lives. While DOGE leaders defend the AI pilot as a cost-saving innovation, analysts insist reforms must come fast. Left unchecked, this case may signal a troubling shift toward using AI not for better governance, but for reckless austerity.