New health research is published every day: some compelling, some preliminary, and some of it leading to confusion and contradiction. One study proclaims that coffee will help you live longer; another suggests you should throw out that latte immediately. One paper seems to unveil a miracle drug, but if you look at the fine print, you notice there were only a handful of participants.
You can find a paper to back up practically any argument, and much of the medical misinformation that travels online mischaracterizes or misinterprets evidence. As a health reporter at The New York Times, I sort through all of it to determine which studies are worth your attention.
I keep a number of factors in mind: Is the research well supported? Is a paper advancing or challenging our understanding of a topic? Is a study telling us something new? How might its findings affect the lives of our readers?
Reporting on research is an important part of my work. Some studies show us whether a promising new drug is effective. Others can help guide our day-to-day decisions about our health: how much alcohol we drink, how much we move throughout the day or even what we eat.
What makes a good study?
Whenever possible, we’re looking for data from the gold standard of scientific research: double-blind randomized controlled trials, in which one group of people receives a treatment or intervention and the other group receives a placebo. Neither the researchers nor the study participants know which group is which. This helps protect against bias, preventing researchers from treating one group differently from the other.
The size and duration of a study also matter. In general, the larger the sample size and the longer the study, the sturdier the data. I also look for studies published in well-regarded medical journals that have established processes to vet and review trials. These studies are peer reviewed, which means outside experts have independently vetted a paper before it is published.
How do Times reporters vet studies?
One of the primary factors in determining whether we report on a study is the quality of the data. While preclinical trials — studies done on cells in petri dishes, or on animals like mice — can provide valuable insights, I prioritize studies that have been conducted on humans; a drug that seems groundbreaking in a mouse may only have a mild effect, or none at all, on a person.
I pay close attention to how studies are designed. What was the protocol for the experiment? Who took part in the trial? Are the participants largely one gender or from one ethnic group, or are they reflective of the broader population? How long did the study last?
I communicate these points clearly and near the start of a story. And when a trial is the first randomized controlled experiment on a given issue, as was the case for a study I reported on last winter on Ozempic and alcohol consumption, I make sure the reader knows.
I prioritize Phase 3 clinical trials, the third and largest stage of testing and the final step before companies ask the Food and Drug Administration to approve a new drug or treatment.
But not every study can be a clinical trial. Most of the research on ultraprocessed foods, for example, comes from observational studies, in which people recount their diets (rather than have scientists watch them eat).
Observational studies still offer useful clues. I’ve reported on research that suggests people with mental health conditions are at higher risk of getting very sick from Covid, for example, a finding that advances our understanding of how the virus affects certain populations. But I took care to note that mental health issues were associated with worse Covid illnesses, not that those conditions necessarily caused those outcomes.
I make sure that readers know, often in the first few paragraphs of an article, if a study was observational, and I note that correlation does not always mean causation.
I strive to put every study into context. I will explain if a new paper adds to research on a given topic — like the connection between alcohol and poor health outcomes — or if it is one of the first studies to show something.
What about conflicts of interest within research?
Scientific journals have several measures in place to guard against conflicts of interest or bias. Every paper includes a section outlining conflicts of interest or disclosures from the researchers, and that’s usually the first place I turn when I am evaluating whether to cover a study.
All sorts of issues can introduce bias into research. A paper might have been funded by a company with a financial interest in certain results. (This is almost always the case for drug studies, which are so costly to run that typically only pharmaceutical companies that stand to make a profit can afford the risk of funding them.) The study’s authors might have their own conflicts of interest, such as past consulting work for companies that profit from certain therapies or treatments.
None of these factors inherently means the findings are bad or can’t be trusted. But I make sure to note, clearly, if a study is funded by a person or organization that might have a stake in the outcomes.
I also make sure that outside experts have evaluated the studies I write about. For my articles, I speak with the study’s researchers as well as experts who were not involved in the trial to gather their perspectives.
What doesn’t the data tell us?
Every paper comes with caveats, and we include them in our articles.
I sometimes write about studies that are attracting a lot of attention but may be misinterpreted or create unwarranted panic. Last summer, for example, I saw a number of alarming headlines from other media organizations about a paper on heavy metals in dark chocolate. But when I spoke with the lead author on that study, he stressed that the average consumer did not have to worry about nibbling the occasional square. I put that caveat high up in the story that I wrote.
As much as I try to clearly convey what a study finds, it is just as important to show what a study is missing. I recently reported on growing evidence that suggests vaping harms your health, but I noted that it is likely to take decades before we understand the whole picture, since possible outcomes like cancers take time to develop.
Science moves both slowly and quickly: Every day brings a new blitz of studies. But high-quality research takes time — and when it does come out, we’re ready to dig in.
Dani Blum is a health reporter for The Times.