For the past three years, an AI-powered surgical tool created by healthcare megacorp Johnson & Johnson has been used in operating rooms after gaining FDA approval. The AI is embedded in the tool's navigation system, which is designed to guide surgeons operating inside a patient's head. It's delicate, difficult work, and the update was approved on the promise that machine learning would make the procedure safer and more precise, especially over time.
Since then, its record has been mixed at best.
An exhaustively researched Reuters feature details the tool's track record, which thus far includes a few successes but is also littered with reports of botched surgeries and misidentified body parts.
The tool in question is the TruDi Navigation System, made by Acclarent, a company that was under the Johnson & Johnson umbrella and is now a subsidiary of Integra LifeSciences. The device didn't originally use AI; the companies jammed AI into it in the hope that it would actually improve the tool.
Before AI was added, the FDA had received seven malfunction reports and one injury report for the device. After the update, the agency logged at least 100 malfunction reports, including 10 injuries, through late 2025. Many of the reports sound the same: the system allegedly misinformed surgeons about where their instruments were while they operated near critical structures like the carotid artery and the base of the skull.
Two cases now in court in Texas involve patients who suffered strokes after arteries were allegedly damaged during routine sinus procedures. One woman needed part of her skull removed to relieve brain swelling. The lawsuits claim the AI component made the device less reliable; the companies argue that the FDA reports don't prove fault.
The most troubling part of the story is that the TruDi system is just one entry in a booming field of AI surgical devices. The FDA has authorized more than 1,300 AI-enabled medical devices thus far, and a recent joint academic review by Johns Hopkins University, Georgetown, and Yale found that 60 of those devices were tied to a total of 182 recalls, with nearly half of the recalls occurring within a year of approval.
The investigation found that reports filed with the FDA also raise concerns about several other AI medical systems, including prenatal ultrasound software that has allegedly mislabeled fetal anatomy and heart monitors that failed at their primary job of detecting abnormal rhythms. Manufacturers often respond that there is no evidence the AI is to blame, or that patients were directly harmed by malfunctioning AI-powered hardware.
The AI boom is disrupting every industry, more often for the worse than for the better. Hospitals are struggling to keep pace, and regulators are being asked to oversee technologies that evolve faster than the rules designed to keep them in check.