DOJ backs xAI in challenge to Colorado’s AI law on discrimination

Colorado AI – The US DOJ is intervening to support xAI in its lawsuit challenging Colorado’s “high-risk” AI disclosure and mitigation law, arguing the statute could force discriminatory outputs.
The US Department of Justice has stepped into xAI’s lawsuit against Colorado, siding with the company as it argues the state’s “high-risk” AI law violates the Constitution.
The case zeroes in on Colorado SB24-205, a measure that requires developers of certain high-risk AI systems, such as tools used in healthcare, employment, and housing, to disclose risks and mitigate the possibility of algorithmic discrimination. Misryoum reports that xAI filed its challenge in early April, and the law is scheduled to take effect in June. Now the DOJ is asking a federal court to declare the statute unconstitutional, shifting the dispute from a private company-versus-state fight into a matter with national legal stakes.
At the center of xAI’s complaint is a First Amendment argument: Colorado is effectively pressuring developers to alter how they build AI systems and to steer product behavior toward the state’s views on diversity and discrimination. The DOJ doesn’t dismiss those concerns. Instead, Misryoum notes, the government is emphasizing a different constitutional theory, Equal Protection, arguing that the law’s approach will functionally require developers to make changes that lead to discrimination based on protected traits.
The DOJ’s complaint highlights how Colorado’s framework uses demographics and “statistical disparities” as evidence of discrimination. That mechanism, the DOJ argues, turns a risk-mitigation requirement into a mandate to reshape outputs in ways that could treat people differently based on race, sex, religion, or other protected characteristics. In the DOJ’s view, forcing those changes would run directly into the Fourteenth Amendment’s equal protection guarantee.
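For readers unfamiliar with how a “statistical disparity” is typically quantified, the sketch below shows one common approach: comparing selection rates across demographic groups with a disparate-impact ratio, in the spirit of the EEOC’s “four-fifths” rule of thumb from employment law. This is purely illustrative; SB24-205 does not appear to prescribe this metric, and the 0.8 threshold, the group labels, and the audit data here are assumptions, not anything drawn from the Colorado statute or the DOJ’s filing.

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest.

    Under the EEOC's "four-fifths" rule of thumb, ratios below
    0.8 are often treated as evidence of adverse impact.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log from a hiring tool: (group, was_selected)
audit = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
ratio = disparate_impact_ratio(audit)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, well below 0.8
```

The dispute maps onto this arithmetic: a developer who “mitigates” by pushing such a ratio toward 1.0 may have to treat people differently by group, which is precisely the equal protection concern the DOJ raises.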
Beyond the legal arguments, the move signals how quickly AI regulation has become a proxy battle over governance philosophy: who should set the rules, and what the rules should reward or discourage. Colorado’s law is one of several attempts at accountability in the real world, where AI systems increasingly influence access to jobs, healthcare decisions, and housing opportunities. Misryoum readers should expect this kind of dispute to intensify as “high-risk” categories expand and more AI models are embedded in public-facing decisions.
Misryoum also sees an important political layer in the timing. The administration’s earlier actions around AI policy have shown particular sensitivity to diversity-related constraints. Executive orders tied to an AI Action Plan have directed agencies toward AI tools that avoid what they describe as “ideological dogmas such as DEI,” and created momentum for a more centralized, federal approach. That backdrop matters in court because it shapes how federal officials frame state regulation: either as necessary consumer and civil-rights protection, or as an unconstitutional patchwork.
There is a tension at the heart of this controversy: both sides are arguing from the language of fairness. The law, as presented by Colorado’s policy framework, aims to prevent discriminatory outcomes by requiring disclosure and mitigation when disparities appear. The DOJ, supporting xAI, argues that the mitigation method itself could require discriminatory treatment. Misryoum can’t resolve the factual merits from the pleadings alone, but the structure of the argument points to a common problem in AI governance: how to measure and remediate bias without introducing new forms of harm or compelled behavior.
For builders and deployers of AI, the immediate impact is practical even before any ruling. A court fight creates uncertainty about compliance timing, documentation expectations, and what “mitigation” looks like in practice. If parts of Colorado’s law are struck down, companies may gain a clearer path to operating across jurisdictions. If the law survives, developers could face a growing compliance burden that forces earlier risk audits, more careful model evaluation, and potentially more conservative release strategies.
The broader implication is that the fight will likely influence future state and federal policy design. Misryoum expects the legal outcome to shape how “high-risk” definitions are written, how statistical disparities are interpreted, and whether mitigation obligations are tailored narrowly enough to withstand constitutional scrutiny. For now, the case is a reminder that AI regulation isn’t just a technical question about fairness metrics; it’s also a constitutional question about what government can require, and what it cannot compel.