
On July 23, President Donald Trump signed a sweeping executive order titled “Preventing Woke AI in the Federal Government.” It’s yet another volley in the ongoing political culture war, and a deliberate attempt to erase terms like diversity, equity, and inclusion (DEI) and roll back the work of those addressing systemic racism in federal artificial intelligence systems.
But for those of us in medicine, especially those advocating for health equity, this isn’t just political posturing. This order threatens lives. It jeopardizes years of work to identify and correct structural biases that have long harmed marginalized communities, particularly Black Americans.
AI is transforming healthcare. It’s already being used to triage emergency room patients, prioritize follow-up care, and predict disease risk. But these algorithms don’t arise from neutral ground. They are trained on real-world data. Data that is anything but unbiased.
Protecting medical accuracy
One of the most striking examples came in a 2019 study published in Science by researchers from UC Berkeley and the University of Chicago. They examined a widely used commercial healthcare algorithm designed to flag patients for high-risk care management. On the surface, it appeared objective and data-driven. But the researchers discovered that the algorithm wasn’t assessing clinical need at all. Instead, it was quietly using a proxy: the amount of money previously spent on a patient’s care.
Because Black patients typically receive less care, even when presenting with the same symptoms, that spending proxy led the algorithm to drastically underestimate their need. While 46.5% of Black patients should have been flagged for additional care, the algorithm identified only 17.7%. That’s not a statistical footnote. That’s a system that has been taught to look the other way.
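To see how a spending proxy can encode bias even when race never appears as an input, consider a minimal simulation. This is a sketch of the mechanism only, not the commercial algorithm from the study, and every number in it is invented for illustration.

```python
# Illustrative simulation of proxy bias -- not the algorithm from the
# 2019 Science study. All numbers are invented to show the mechanism:
# equal clinical need, unequal historical spending.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True clinical need is identically distributed in both groups.
need = rng.normal(loc=50, scale=10, size=n)
is_black = rng.random(n) < 0.5

# Historical spending tracks need, but is systematically suppressed
# for Black patients (less care delivered for the same symptoms).
spending = need * np.where(is_black, 0.6, 1.0) + rng.normal(0, 5, n)

# The "risk score" is the spending proxy itself: flag the top 20%
# of patients by past cost for extra care management.
flagged = spending >= np.quantile(spending, 0.80)

# Ground truth: the top 20% of patients by actual clinical need.
truly_high_need = need >= np.quantile(need, 0.80)

for group, mask in [("Black", is_black), ("white", ~is_black)]:
    high_need = truly_high_need & mask
    print(f"{group}: {flagged[high_need].mean():.0%} of high-need patients flagged")
```

By construction, need is identical across the two groups, yet ranking by dollars flags far fewer high-need Black patients. That is the shape of the disparity the researchers documented.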
This isn’t an isolated case. Consider two other race-adjusted algorithms still used today:
Estimates of kidney function, calculated using glomerular filtration rate (GFR) equations, have long included a “correction factor” for Black patients, based on unscientific assumptions about muscle mass. Researchers have repeatedly found that this adjustment inflated kidney-function scores, meaning many Black patients were deemed ineligible for transplants or delayed in receiving specialty care; the sketch after this list shows how small the arithmetic is and how large its consequences can be.
And pulmonary function tests (PFTs), used to diagnose asthma and other lung diseases, often apply a race-based correction that assumes Black people naturally have lower lung capacity, shifting what counts as “normal” downward and contributing to underdiagnosis.
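The arithmetic behind the kidney adjustment is simple enough to show directly. The sketch below uses the 1.159 Black-race coefficient from the 2009 CKD-EPI creatinine equation (removed when the equation was refit without race in 2021); the baseline eGFR values and the referral cutoff are illustrative assumptions, not clinical guidance.

```python
# How a race "correction factor" can change clinical eligibility.
# The 1.159 multiplier is the Black-race coefficient from the 2009
# CKD-EPI creatinine equation; the base eGFR values and the cutoff
# below are illustrative assumptions, not clinical guidance.
REFERRAL_CUTOFF = 20.0  # mL/min/1.73 m^2, a commonly used transplant-referral threshold

def reported_egfr(base_egfr: float, black: bool) -> float:
    """Apply the legacy race coefficient to a race-neutral eGFR."""
    return base_egfr * (1.159 if black else 1.0)

for base in (16.0, 18.0):
    adjusted = reported_egfr(base, black=True)
    print(f"race-neutral eGFR {base:.0f} -> race-adjusted {adjusted:.1f}; "
          f"referred without adjustment: {base <= REFERRAL_CUTOFF}, "
          f"with adjustment: {adjusted <= REFERRAL_CUTOFF}")
```

A patient whose race-neutral eGFR of 18 would trigger a transplant referral is pushed to 20.9 by the multiplier and falls above the cutoff: same kidneys, different care.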
These aren’t just historical artifacts. They are examples of how racism can become embedded in code. Quietly, pervasively, and lethally.
In recent years, clinicians and researchers like me have pushed back. Many hospitals are removing race-based corrections from medical equations. Equity-centered AI tools are being developed to detect and mitigate disparities, not ignore them. This work isn’t about being “woke.” It’s about being accurate, improving outcomes, and saving lives.
The danger of Trump’s anti-woke culture war
Trump’s executive order threatens to shut down the important work that has been done to make medical algorithms more accurate.
By banning federal agencies from considering systemic racism or equity in AI development, the order effectively outlaws the very efforts needed to fix these problems. It silences the data scientists trying to build a fairer system. It tells us that naming inequality is worse than perpetuating it.
Supporters of the order claim it promotes “neutrality.” But neutrality, in a system built on inequity, is not justice. It’s reinforcement of the very biases it pretends to ignore.
The danger isn’t hypothetical. Black patients are already less likely to be offered pain medication, more likely to be misdiagnosed, and more likely to die from preventable conditions. Ethically designed AI could help surface these disparities earlier. But only if we’re allowed to build it that way.
And bias in AI doesn’t just harm Black communities. Studies have shown facial recognition systems misidentify women and people of color at far higher rates than white men. In one case, an algorithm used in hiring systematically downgraded résumés from women. In another, a healthcare tool underestimated the risk of heart disease in women because historical data underdiagnosed them in the first place. This is how inequality replicates: biased inputs become automated decisions without scrutiny or context.
Erasing DEI from AI isn’t about neutrality. It’s about selective memory. It’s an attempt to strip away the language we need to diagnose the problem, let alone fix it. If we force AI to ignore history, it will rewrite it. Not just the facts, but the people those facts represent.
Trump’s executive order politicizes and weaponizes AI. And for millions of Americans already unseen by our legal, medical, and technological systems, the cost will be measured in lives.