DNYUZ
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

April 10, 2026

Meta’s Superintelligence Labs launched its first generative AI model, called Muse Spark, earlier this week. It is currently available through the Meta AI app, but the company plans to integrate Muse Spark across all of its platforms—including Facebook, Instagram, and WhatsApp—in the coming weeks.

Meta claims that Muse Spark was designed, in part, to be better at answering questions people have about their health. The company even worked with “over 1,000 physicians to curate training data that enables more factual and comprehensive responses,” according to Meta’s announcement blog.

As the new model rolls out to millions of users, I tested Muse Spark to see how it would respond to health-related questions. When I asked how it could help me, the bot listed off a few basic uses, like building a workout routine or generating questions to ask my doctor, but a direct request for my health data stood out:

“Paste your numbers from a fitness tracker, glucose monitor, or a lab report. I’ll calculate trends, flag patterns, and visualize them,” read the Meta AI output. “Example: ‘Here are my last 10 blood pressure readings—is there a pattern?’”

Nudging users to upload their health data is not unique to Meta. OpenAI’s ChatGPT and Anthropic’s Claude both have chatbot modes designed specifically for helping users understand their health and make decisions. For example, you can open Claude and connect it to your Apple or Android health data with just the flip of an in-app toggle. Then, Claude will use that information as part of its answers. Google also lets you upload medical data to Fitbit for its AI health coach to parse.

Handing over this kind of data to any AI tool is a risky decision, even if it can yield personalized advice. “Usage of these models can be really tricky,” says Monica Agrawal, an assistant professor at Duke University and cofounder of Layer Health, an AI platform that helps hospitals examine medical charts. “The more information you give it, the more context it has about you and, potentially, it can provide better responses. But on the flip side, there are major privacy concerns to sharing your health data without protections.”

Agrawal is concerned about users uploading sensitive data to chatbots, since these commonly used AI tools are not bound by HIPAA, the landmark US law that guards patients from having their sensitive health information exposed. (Layer Health, by contrast, is HIPAA compliant.) HIPAA sets the high standard of privacy people are used to experiencing during doctor visits; the information someone shares with a bot is much more loosely regulated, even if it’s a clinical lab result.

Anything you share in a chat with Meta AI may be stored and used to train future AI models. “We keep training data for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely, and efficiently,” reads Meta’s privacy policy about generative AI. Meta has also stated it may tailor advertisements for users based on their interactions with the AI features.

Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.

It can be tempting to rely on AI for help interpreting health information, especially given the skyrocketing cost of medical treatment and the inaccessibility of regular doctor visits for some people navigating the US health care system.

“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.

The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”

In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agrawal.

When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this approach was not appropriate for most people and could put me at risk for an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories most days, a regimen that would leave me malnourished.

Chatting with a bot can feel like an intimate, personal affair, even when it isn’t. Last year, Meta launched an in-app Meta AI feed where users could discover conversations other people had with the bot. Some of the conversations available in that public feed included medical questions and embarrassing prompts that users likely did not intend to broadcast widely. Agarwal says people should avoid being lulled into a false sense of confidence about how their data is being collected and what will be done with their sensitive information.

“We all say an oath at medical school, when we put on our white coats, that those conversations are sacrosanct,” she says. “These bots aren’t taking those oaths.”

The post Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice appeared first on Wired.


DNYUZ © 2026
