The White House’s AI Action Plan, released in July, mentions “health care” only three times. But it is one of the most consequential health policies of the second Trump administration. Its sweeping ambitions for AI—rolling back safeguards, fast-tracking “private-sector-led innovation,” and banning “ideological dogmas such as DEI”—will have long-term consequences for how medicine is practiced, how public health is governed, and who gets left behind.
Already, the Trump administration has purged data from government websites, slashed funding for research on marginalized communities, and pressured government researchers to restrict or retract work that contradicts political ideology. These actions aren’t just symbolic—they shape what gets measured, who gets studied, and which findings get published. Now, those same constraints are moving into the development of AI itself. Under the administration’s policies, developers have a clear incentive to make design choices or pick data sets that won’t provoke political scrutiny.
These signals are shaping the AI systems that will guide medical decision making for decades to come. The accumulation of technical choices that follows—encoded in algorithms, embedded in protocols, and scaled across millions of patients—will cement the particular biases of this moment in time into medicine’s future. And history has shown that once bias is encoded into clinical tools, even obvious harms can take decades to undo—if they’re undone at all.
AI tools were permeating every corner of medicine before the action plan was released: assisting radiologists, processing insurance claims, even communicating on behalf of overworked providers. They’re also being used to fast-track the discovery of new cancer therapies and antibiotics, while advancing precision medicine that helps providers tailor treatments to individual patients. Two-thirds of physicians used AI in 2024—a 78 percent jump from the year prior. Soon, not using AI to help determine diagnoses or treatments could be seen as malpractice.
At the same time, AI’s promise for medicine is limited by the technology’s shortcomings. One health-care AI model confidently hallucinated a nonexistent body part. Another may make doctors’ procedural skills worse. Providers are demanding stronger regulatory oversight of AI tools, and some patients are hesitant to have AI analyze their data.
The stated goal of the Trump administration’s AI Action Plan is to preserve American supremacy in the global AI arms race. But the plan also presses developers of leading-edge AI models to make products that are free from “ideological bias” and “designed to pursue objective truth rather than social engineering agendas.” This guidance is murky enough that developers must interpret vague ideological cues, then quietly calibrate what their models can say, show, or even learn to avoid crossing a line that’s never clearly drawn.
Some medical tools incorporate large language models such as ChatGPT. But many AI tools are bespoke and proprietary and rely on narrower sets of medical data. Given how this administration has aimed to restrict data collection at the Department of Health and Human Services and ensure that those data conform to its ideas about gender and race, any health tools developed under Donald Trump’s AI action plan may face pressure to rely on training data that reflects similar principles. (In response to a request for comment, a White House official said in an email that the AI plan and the president’s executive order on scientific integrity together ensure that “scientists in the government use only objective, verifiable data and criteria in scientific decision making and when building and contracting for AI,” and that future clinical tools are “not limited by the political or ideological bias of the day.”)
Models don’t invent the world they govern; they depend on and reflect the data we feed them. That’s what every research scientist learns early on: garbage in, garbage out. And if governments narrow what counts as legitimate health data and research as AI models are built into medical practice, the blind spots won’t just persist; they’ll compound and calcify into the standards of care.
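The garbage-in, garbage-out dynamic can be made concrete with a toy simulation. All numbers and group labels below are hypothetical, not drawn from any real clinical data set: a single diagnostic cutoff is fit to data that underrepresents one group, and it works well for the majority while failing badly for the minority.

```python
import random

random.seed(0)

# Hypothetical biomarker: disease raises it by 15 units, but the healthy
# baseline differs between two populations (group B runs 10 units higher).
def sample(group, sick):
    base = 50 if group == "A" else 60
    return base + (15 if sick else 0) + random.gauss(0, 3)

# Training data: 950 patients from group A, only 50 from group B.
train = [("A", s, sample("A", s)) for s in (random.random() < 0.5 for _ in range(950))]
train += [("B", s, sample("B", s)) for s in (random.random() < 0.5 for _ in range(50))]

# "Training": choose the cutoff that minimizes errors on the data we have.
best = min(range(40, 80),
           key=lambda t: sum((v >= t) != s for _, s, v in train))

# Evaluate on balanced test sets, one per group.
def error_rate(group):
    test = [random.random() < 0.5 for _ in range(2000)]
    return sum((sample(group, s) >= best) != s for s in test) / len(test)

print(f"cutoff={best}, error A={error_rate('A'):.2f}, error B={error_rate('B'):.2f}")
```

The cutoff lands where group A is separable, so group B’s healthy patients sit above it and are flagged as sick at a far higher rate: the model never saw enough of them to learn otherwise.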
In the United States, gaps in data have already limited the perspective of AI tools. During the first years of COVID, data on race and ethnicity were frequently missing from death and vaccination reports. A review of data sets fed to AI models used during the pandemic found similarly poor representation. Cleaning up these gaps is difficult and expensive—but it’s the best way to ensure the algorithms don’t indelibly incorporate existing inequities into clinical code. After years of advocacy and investment, the U.S. had finally begun to close long-standing gaps in how we track health and who gets counted.
But over the past several months, that type of fragile progress has been deliberately rolled back. At times, CDC web pages have been rewritten to reflect ideology, not epidemiology. The National Institutes of Health halted funding for projects it labeled as “DEI”—despite never defining what that actually includes. Robert F. Kennedy Jr. has made noise about letting NIH scientists publish only in government-run journals, and demanded the retraction of a rigorous study, published in the Annals of Internal Medicine, that found no link between aluminum and autism. (Kennedy has promoted the opposite idea: that such vaccine ingredients are a cause of autism.) And a recent executive order gives political appointees control over research grants, including the power to cancel those that don’t “advance the President’s policy priorities.” Selective erasure of data is becoming the foundation for future health decisions.
American medicine has seen the consequences of building on such a shaky foundation before. Day-to-day practice has long relied on clinical tools that confuse race with biology. Lung-function testing used race corrections derived from slavery-era plantation medicine, leading to widespread underdiagnosis of serious lung disease in Black patients. In 2023, the American Thoracic Society urged the use of a race-neutral approach, yet adoption is uneven, with many labs and devices still defaulting to race-based settings. A kidney-function test used race coefficients that delayed specialty referrals and transplant eligibility. An obstetric calculator factored in race and ethnicity in ways that increased unnecessary Cesarean sections among Black and Hispanic women.
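The mechanics of such a delay are simple to sketch. The multiplier and threshold below are simplified stand-ins, not the exact clinical formulas: older kidney-function equations multiplied the estimated filtration rate upward for Black patients, which could push an identical underlying value above the cutoff that triggers a transplant referral.

```python
# Hypothetical illustration of a race coefficient in a kidney-function
# estimate. Coefficient and threshold are simplified stand-ins.
RACE_COEFFICIENT = 1.16   # older equations inflated eGFR for Black patients

def egfr(base_egfr, race_adjusted):
    """Return the reported eGFR, with or without the race multiplier."""
    return base_egfr * (RACE_COEFFICIENT if race_adjusted else 1.0)

TRANSPLANT_REFERRAL = 20  # reported eGFR at or below this triggers evaluation

base = 19  # the same underlying kidney function for two patients
print(egfr(base, race_adjusted=False) <= TRANSPLANT_REFERRAL)  # True: referred
print(egfr(base, race_adjusted=True) <= TRANSPLANT_REFERRAL)   # False: reported as 22.04
```

Two patients with identical kidney function get different answers from the software, and only one is referred on time.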
Once race-based adjustments are baked into software defaults, clinical guidelines, and training, they persist—quietly and predictably—for years. Even now, dozens of flawed decision-making tools that rely on outdated assumptions remain in daily use. Medical devices tell a similar story. Pulse oximeters can miss dangerously low oxygen levels in darker-skinned patients. During the COVID pandemic, those readings fed into hospital-triage algorithms—leading to disparities in treatment and trust. Once flawed metrics get embedded into “objective” tools, bias becomes practice, then policy.
When people in power define which data matter and the outputs are unchallenged, the outcomes can be disastrous. In the early 20th century, the founders of modern statistics—Francis Galton, Ronald Fisher, and Karl Pearson—were also architects of the eugenics movement. Galton, who coined the term eugenics, pioneered correlation and regression and used these tools to argue that traits like intelligence and morality were heritable and should be managed through selective breeding. Fisher, often hailed as the “father of modern statistics,” was an active leader in the U.K.’s Eugenics Society and backed its policy of “voluntary” sterilization of those deemed “feeble-minded.” Pearson, creator of the p-value and chi-squared tests, founded the Annals of Eugenics journal and deployed statistical analysis to argue that Jewish immigrants would become a “parasitic race.”
For each of these men—and the broader medical and public-health community that supported the eugenics movement—the veneer of data objectivity helped transform prejudice into policy. In the 1927 case Buck v. Bell, the Supreme Court codified their ideas when it upheld compulsory sterilization in the name of public health. That decision has never been formally overturned.
Many AI proponents argue that concerns about bias are overblown. They’ll note that bias has been fretted over for years, and to some extent, they’re right: Bias has always been present in AI models, but its effects were more limited, in part because the systems themselves were narrowly deployed. Until recently, the number of AI tools used in medicine was small, and most operated at the margins of health care, not at its core. What’s different now is the speed and scale of AI’s expansion into the field, at the very moment the Trump administration is dismantling the guardrails for regulating AI and shaping these models’ future.
Human providers are biased, too, of course. Researchers have found that women’s medical concerns are dismissed more often than men’s, and some white medical students falsely believe Black patients have thicker skin or feel less pain. Human bias and AI bias alike can be addressed through training, transparency, and accountability, but correcting the latter requires accounting for the fallibility of both the humans and the technology. Technical fixes exist, such as reweighting data, retraining models, and auditing for bias, but they are often narrow and opaque. Many advanced AI models, especially large language models, are functionally black boxes: Using them means feeding information in and waiting for outputs. When biases arise in the computational process, the people who depend on that process cannot tell when or how they were introduced. That opacity fuels a bias feedback loop: AI amplifies what we put in, then shapes what we take away, leaving humans more biased for having trusted it.
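One of those technical fixes, reweighting, can be sketched in a few lines. This is a minimal illustration with made-up group labels and counts, assuming a simple inverse-frequency scheme; real bias audits are far more involved.

```python
from collections import Counter

# Illustrative training set: a 90/10 split between two groups.
samples = ["A"] * 90 + ["B"] * 10

counts = Counter(samples)
n_groups = len(counts)

# Inverse-frequency weights: each group's total weight comes out equal,
# so the minority group is no longer drowned out during training.
weights = {g: len(samples) / (n_groups * c) for g, c in counts.items()}

print({g: round(w, 3) for g, w in weights.items()})  # {'A': 0.556, 'B': 5.0}
```

Each sample from the rare group counts roughly nine times as much as one from the common group, which balances their influence, but says nothing about whether the rare group’s data were collected well in the first place.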
A “move fast and break things” rollout of AI in health care, especially when based on already biased data sets, will encode similar assumptions into models that are enigmatic and self-reinforcing. By the time anyone recognizes the flaws, they won’t just be baked into a formula; they’ll be indelibly built into the infrastructure of care.
The post The Trump Administration Will Automate Health Inequities appeared first on The Atlantic.