
Colorado’s SB 24-205, passed in May 2024, takes effect in February 2026. The law targets algorithmic discrimination in high-risk AI systems, requiring developers and deployers to exercise “reasonable care” to prevent disparate impact based on race, age, sex, disability, or other protected characteristics. It governs domains such as employment, housing, and health care, where AI-driven decisions carry tangible risks. Proponents view it as progress toward equity. Opponents predict compliance costs and legal ambiguity. David Sacks, tech investor and podcaster, has emerged as one of its most vocal critics, warning of unintended consequences for U.S. AI innovation and competitiveness.
Regulatory Scope and Industry Concerns
The law defines algorithmic discrimination expansively: a system that causes unlawful disparate impact may be in breach even if the effect is inadvertent. Firms deploying “high-risk” AI will need to document risk mitigations, test their systems, and complete impact assessments. Noncompliance exposes them to enforcement actions and penalties.
To David Sacks, these obligations are too vague. Terms like “disparate impact,” he argues, come with no clear standard for measurement, which means startups will face unpredictable compliance burdens. Bigger companies can adapt with lawyers; smaller firms will buckle under the costs.
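There is at least one widely cited proxy: the “four-fifths rule” from U.S. EEOC guidance on employment selection, which flags any group whose selection rate falls below 80% of the most favored group’s. The minimal Python sketch below, with hypothetical group names and numbers throughout, shows the basic arithmetic. It also hints at the judgment calls (which groups to compare, which decisions count, whether 0.8 is the right line) that Sacks says make compliance costs hard to bound.

```python
# Minimal sketch of the EEOC "four-fifths" (adverse impact) ratio,
# one common proxy for disparate impact in selection decisions.
# All group names and numbers here are hypothetical.

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group's selection rate to a ratio against the highest rate.

    outcomes maps group -> (selected, total applicants).
    A ratio below 0.8 is the traditional flag for potential disparate impact.
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical outcomes from an AI-assisted hiring screen:
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(outcomes)
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]

print(ratios)   # {'group_a': 1.0, 'group_b': 0.625}
print(flagged)  # ['group_b'] falls below the 0.8 threshold
```

The arithmetic itself is trivial; the contested part is everything around it, which is precisely the gap both sides of this debate point to.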
Advocates retort that reasonable care is not an unattainable threshold. They point to industries such as finance, where anti-discrimination rules are already in place. The law, they say, simply extends that accountability to machines.
Yet the lack of a national framework complicates matters. As of 2025, more than 100 state AI laws are already on the books, each with its own definitions and criteria. Sacks warns that this creates a “patchwork” effect that makes it hard for AI firms to scale across states. He believes investors may gravitate toward markets with more uniform rules, such as Europe or parts of Asia. That raises the question: is the U.S. protecting consumers, or incentivizing innovation to happen somewhere else?
Sacks says the danger is not hypothetical. Startups, he contends, already price compliance into funding rounds, and venture capital could relocate abroad if the U.S. cannot coordinate its regulation. Absent a federal standard, every new state law adds friction.
Fragmentation vs. Federal Approach
Colorado is not alone. California’s AI laws prioritize privacy and ownership. Kentucky’s measures focus on transparency. Together they illustrate a decentralized U.S. strategy, but critics argue the patchwork undermines clarity.
Empirical evidence is thin. No peer-reviewed research establishes a causal link between AI regulation and declining innovation; much of the controversy rests on expert opinion. Yet the fear resonates in Silicon Valley, where many founders have war stories about ballooning legal overhead.
Some policymakers see advantages in state experimentation: states serve as “laboratories of democracy,” tailoring regulations to local needs. But businesses prefer predictability, and AI products scale across state lines by default. That tension drives the conflict.
Sacks frames it as a matter of national competitiveness. Will America lead in AI, or will regulatory complexity outsource that leadership? His outlook is grim: unless regulators standardize quickly, U.S. AI risks falling behind.
Conclusion
Colorado’s SB 24-205 is a groundbreaking step in AI oversight. It demands fairness in automated decisions, but it also stokes deep anxieties. David Sacks has placed himself at the center of the skepticism, warning of fragmented regulation and stifled innovation. Advocates see the law as long-overdue consumer protection; critics fear regulatory excess and economic damage. The absence of hard data leaves much of the debate to conjecture, but the stakes are high. As AI regulation proliferates at the state level, the U.S. can either unify under federal standards or maintain a patchwork that tests both innovation and competitiveness.