In Elon Musk’s world, AI is the new MD. The X owner has encouraged users to upload their medical test results, such as CT and bone scans, to the platform, saying the information would be used to train Grok, X’s artificial intelligence chatbot, to interpret them efficiently.
Earlier this month, Elon Musk reposted a video on X of himself talking about uploading medical data to Grok, saying: “Try it!”
“You can upload your X-rays or MRI images to Grok and it will give you a medical diagnosis,” Musk said in the video, which was uploaded in June. “I have seen cases where it’s actually better than what doctors tell you.”
In 2024, Musk said medical images uploaded to Grok would be used to train the bot.
“This is still early stage, but it is already quite accurate and will become extremely good,” Musk wrote on X. “Let us know where Grok gets it right or needs work.”
Musk also claimed in his replies that Grok saved a man in Norway by diagnosing a problem his doctors had failed to notice. The X owner has also been willing to upload his own medical information to the bot.
“I did an MRI recently and submitted it to Grok,” Musk said in an episode of the Moonshots with Peter Diamandis podcast released on Tuesday. “None of the doctors nor Grok found anything.”
Musk did not disclose in the podcast why he received an MRI. xAI, which owns X, told Fortune in a statement: “Legacy Media Lies.”
Grok is facing some competition in the AI health space. This week OpenAI launched ChatGPT Health, a feature within the chatbot that allows users to securely connect medical records and wellness apps like MyFitnessPal and Apple Health. The company said it would not train its models on personal medical information.
AI chatbots have become a ubiquitous source of medical information. OpenAI reported this week that 40 million people seek health information from the model, 55% of whom use the bot to look up or better understand symptoms.
Dr. Grok will see you now
So far, Grok’s ability to detect medical abnormalities has been mixed. Some users claimed the AI successfully analyzed blood test results and identified breast cancer. But it also grossly misinterpreted other information, according to physicians who responded to some of Musk’s posts about Grok’s ability to interpret medical data. In one instance, Grok mistook a “textbook case” of tuberculosis for a herniated disk or spinal stenosis. In another, the bot mistook a mammogram of a benign breast cyst for an image of testicles.
A May 2025 study found that while all AI models have limitations in processing and predicting medical outcomes, Grok was the most effective compared with Google’s Gemini and ChatGPT-4o at determining the presence of pathologies in 35,711 slices of brain MRIs.
“We know they have the technical capability,” Dr. Laura Heacock, associate professor at the New York University Langone Health Department of Radiology, wrote on X. “Whether or not they want to put in the time, data and [graphics processing units] to include medical imaging is up to them. For now, non-generative AI methods continue to outperform in medical imaging.”
The problems with Dr. Grok
Musk’s lofty goal of training his AI to make medical diagnoses is also a risky one, experts said. While AI has increasingly been used to make complicated science more accessible and to create assistive technologies, training Grok on data from a social media platform raises concerns about both its accuracy and user privacy.
Ryan Tarzy, CEO of health technology firm Avandra Imaging, said in an interview with Fast Company that asking users to directly input data, rather than sourcing it from secure databases of de-identified patient data, is Musk’s way of trying to accelerate Grok’s development. The information also comes from a limited sample of whoever is willing to upload their images and tests, meaning the AI is not gathering data from sources representative of the broader and more diverse medical landscape.
Medical information shared on social media isn’t bound by the Health Insurance Portability and Accountability Act (HIPAA), the federal law that protects patients’ private information from being shared without their consent. That means there’s less control over where the information goes after a user chooses to share it.
“This approach has myriad risks, including the accidental sharing of patient identities,” Tarzy said. “Personal health information is ‘burned in’ to many images, such as CT scans, and would inevitably be released in this plan.”
The full extent of the privacy dangers Grok may present isn’t known, because X may have privacy protections that haven’t been made public, according to Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania. He said users share medical information at their own risk.
“As an individual user, would I feel comfortable contributing health data?” he previously told the New York Times. “Absolutely not.”
A version of this story was originally published on Fortune.com on Nov. 20, 2024.
More on AI and health:
- OpenAI launches ChatGPT Health in a push to become a hub for personal health data
- OpenAI suggests ChatGPT play doctor as millions of Americans face spiking insurance costs: ‘In the U.S., ChatGPT has become an important ally’
- As Utah gives the AI power to prescribe some drugs, physicians warn of patient risks