From Detoxes to Sugar Pills: How to Spot Bad Science Before It Hurts You

Health advice is louder than ever. Here's how to tell what's real and what's just noise.

6/25/2025 · 9 min read

A Cold Plunge Feels Amazing, but That Doesn't Make It Science


Cold plunges are having a moment, and I'll admit, I'm on board. It's become a morning ritual: shock, breath, clarity. I love how it makes me feel: energized, in a good mood, ready to tackle the day. And honestly, that's reason enough to keep doing it.

But I'm also clear about what it is and what it isn't. While some short-term benefits are supported by early studies (reduced inflammation, improved alertness), there's still precious little evidence of any long-term, clinically significant health impact. It's a boost, not a breakthrough. Here's the thing though: when something makes us feel this good, it's easy to let that feeling convince us it must be doing more than it actually is. That gap between how something feels and what it actually does? That's where a lot of health misinformation lives.

I've been here before. Years ago, it was juice cleanses. I thought I could detox with kale water. But your body already has a detox system: your liver, kidneys, lungs, and skin. And yes, in my opinion, anything labelled "detox" is a scam; no green potion required. A great article in the New York Times puts it bluntly: fruit juice is not a health food. A 12-ounce glass of orange juice contains about as much sugar as a can of Coke. It's not even close to eating whole fruit. Juice lacks fiber, spikes blood sugar, and leaves you hungry an hour later.

That's the thing about health claims: the ones that stick are usually the ones that sound the best. But they're not always the ones that stand up to scrutiny.

Think about all the things we used to "know": the sugar rush in kids, the idea that milk builds unbreakable bones, the rule about drinking eight glasses of water a day, or that carrots give you superhero vision and spinach superhuman iron levels. All widely repeated. All thoroughly debunked.

From Pharma Backrooms to Public Headlines

I used to work in biopharmaceutical logistics, which gave me a unique window into how clinical trials are run and spun. Conversations with medical directors made one thing clear: the path from study data to public news is rarely straight.

A great example? In January 2023, University College London published a glowing headline: "New precision therapy for bile duct cancer extends patients' lives." The drug was futibatinib (don't ask me to pronounce it), and the article highlighted that tumours shrank in over 40% of patients in the FOENIX-CCA2 trial. It was hailed as a breakthrough.

But in March 2025, the FDA issued a warning to the drug's maker, Taiho Oncology, for misleading claims. The trial was a single-arm, open-label study with no control group, meaning there was no way to know whether the drug caused the improvement or whether patients just got lucky. And because everyone involved knew they were getting the drug, the chance of bias went up: people may have expected to feel better and reported outcomes more positively, especially for outcomes that involve judgement calls, like assessing tumour response.

The FDA flagged the survival data (21.7 months) and the 83% disease control rate (DCR) as scientifically uninterpretable. DCR is the percentage of patients whose cancer either shrinks or remains stable for a certain period. Without a control group, it's impossible to know whether those outcomes were due to the drug or would have happened anyway, and presenting the numbers without that context made the drug look more effective than it might truly be. Taiho had to remove its claims.

This wasn't fraud. It was just science stretched too thin and reported without context.

How Germany Made Sugar Pills Official Medicine

I was born in Germany, where homeopathy is as deeply rooted as bread and bureaucracy. Multiple comprehensive reviews and meta-analyses have concluded that homeopathy's benefits are indistinguishable from placebo. Yet despite the lack of clinical evidence for its efficacy, it remains state-supported and is even reimbursed by statutory health insurers. Why? Public support and political inertia.

Arnica montana, one of homeopathy's poster children, is an herbal remedy promoted for bruising and inflammation, yet systematic reviews have found it performs no better than placebo. Still, in 2025, over 200,000 citizens signed a petition defending its place in the healthcare system. It's our cultural equivalent of keeping the autobahn speed-limit-free, a national quirk that feels sacred. In many ways, it mirrors America's refusal to implement stricter gun laws: emotionally charged, politically entrenched, and resistant to reason. Sometimes tradition wins over evidence, no matter how clear the data.

What makes this more than just a quirky national habit is how it seeps into real-world healthcare decisions. When a treatment like homeopathy is legitimized by insurance systems and medical education, it creates confusion for patients and opens the door for other pseudoscientific approaches to gain traction. A parent might choose a homeopathic remedy over a proven paediatric treatment. A patient with chronic pain may delay trying effective options because they've been nudged toward something that only works as a placebo. In this way, homeopathy stops being a harmless quirk: it becomes a template that erodes the boundary between rigorous medicine and feel-good mythology.

What Makes a Clinical Study Worth Believing?

Let's build your toolkit for cutting through the noise. Here's what makes a study worth trusting:

  • Is it randomized? Randomized controlled trials (RCTs) are the gold standard. Anything else (observational, single-arm) requires caution.

    Example: The FOENIX-CCA2 trial on futibatinib wasn’t randomized. Failing to randomly assign participants to treatment and placebo groups can seriously skew results and inflate perceived benefits.

  • Is there a control group? If not, there's no way to know what would have happened otherwise.

    Example: The FOENIX-CCA2 trial was a single-arm clinical trial. It’s a study design in which all participants receive the same experimental treatment, with no parallel control or comparison group. Its main limitation is the inability to distinguish treatment effects from natural disease progression, placebo effects, or bias, making it difficult to draw definitive conclusions about the intervention’s true efficacy and safety.

  • How big is the sample? Small studies tell us very little about what happens in the real world. Look for trials with at least a few hundred participants.

    Example: Clinical trials of Arnica montana frequently involved fewer than 100 participants, making their findings statistically weak and difficult to generalize. Even the Phase 2 FOENIX-CCA2 trial enrolled only 103 patients.

  • What kind of result are they measuring? Some studies focus on indirect signs that a treatment is working. For example, whether a tumour shrinks. These are called surrogate outcomes. But they don't always translate into real-world results like longer life or lower risk of serious illness, which are far more meaningful.

    Example: The FOENIX-CCA2 trial reported tumour shrinkage as a key success metric, but didn't demonstrate improved survival in a controlled way because it lacked a comparison group.

  • Was the study blinded or double-blind (best)? In clinical studies, "blinded" means that participants (and sometimes other parties) don't know which treatment they're receiving. "Double-blind" means that neither the participants nor the researchers administering the treatment know, which minimizes bias and keeps reported results objective.

    Example: Everyone in the single-arm, open-label Phase 2 FOENIX-CCA2 study, including patients and investigators (clinical trial lingo for doctors), knew that participants were receiving the experimental drug. That's what "open-label" means. This can introduce bias, particularly when outcomes rest on subjective judgements.

  • Is the benefit relative or absolute? A 50% relative risk reduction sounds amazing. But if your absolute risk drops from 2% to 1%, that's only one percentage point (see the sketch after this list).

    Example: Futibatinib's survival data was presented without clear context. The FDA later criticized these claims for overstating the benefit relative to the evidence provided.

  • Who funded the study? It's not always a red flag, but pharma-sponsored studies deserve closer scrutiny.

    Example: The FOENIX-CCA2 trial was sponsored by Taiho Oncology, which also published promotional claims about the drug's benefits on its healthcare provider website. The FDA later deemed these claims misleading.

  • Has it been peer-reviewed or published? Look it up on PubMed or ClinicalTrials.gov to see where the trial lives.

    Example: Many Arnica studies are published in journals with less rigorous peer review, or don't appear in major indexed databases at all.

  • Has it been replicated? One study is never enough. Real breakthroughs survive multiple trials.

    Example: Arnica montana trials have generally not been replicated in large-scale, well-controlled studies. The few attempts that exist vary significantly in quality and results, making reliability a major concern. When promising early results can't be consistently reproduced, that's often a sign the initial excitement was premature.
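
To make the relative-vs-absolute point above concrete, here's a minimal Python sketch. The event rates are invented for illustration, not taken from any real trial:

    # Compare two event rates, given as fractions (e.g. 0.02 for 2%).
    def risk_stats(control_risk, treated_risk):
        arr = control_risk - treated_risk           # absolute risk reduction
        rrr = arr / control_risk                    # relative risk reduction
        nnt = 1 / arr if arr > 0 else float("inf")  # number needed to treat
        return {"ARR": arr, "RRR": rrr, "NNT": nnt}

    # The same "50% relative risk reduction" can hide very different realities:
    common = risk_stats(control_risk=0.10, treated_risk=0.05)  # 10% -> 5%
    rare = risk_stats(control_risk=0.02, treated_risk=0.01)    # 2% -> 1%

    print(f"Common: RRR={common['RRR']:.0%}, ARR={common['ARR']:.1%}, NNT={common['NNT']:.0f}")
    print(f"Rare:   RRR={rare['RRR']:.0%}, ARR={rare['ARR']:.1%}, NNT={rare['NNT']:.0f}")

Both scenarios advertise the same 50% relative reduction, but in the first you'd treat 20 people to prevent one event, and in the second you'd treat 100. That second number is the one headlines rarely mention.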

How to Trace Claims Back to Real Studies

Here’s a quick, practical checklist I use when a health claim sounds too good to be true. Let’s walk through an example: you see a headline that says, “Green tea reduces heart disease risk by 30%.” Sounds promising—but is it legit?

  1. Find the source.
    Does the article mention a specific study or trial? If yes, note the title or trial ID. If not, search using keywords like “green tea heart disease clinical trial.”

  2. Search for the study.
    Use PubMed, ClinicalTrials.gov, Google Scholar, or the Cochrane Library. (The first sketch after this checklist shows a programmatic way to search PubMed.)

  3. Check the study design.
    Was it a randomized, double-blind controlled trial? Or something less rigorous, like an open-label or observational study?

  4. Assess the sample size.
    Was the study done on 20 people or 20,000? Bigger samples mean more reliable results. (The second sketch after this checklist shows why.)

  5. Examine what was measured.
    Did the researchers track actual heart disease events? Or just cholesterol levels (a surrogate marker)?

  6. Understand the risk reduction.
    A 30% reduction sounds big—but is it relative or absolute? Going from 10% to 7% is helpful. Going from 0.2% to 0.14%? Less so.

  7. Check who funded it.
    Was the research backed by a green tea supplement company? That doesn’t make it invalid, but it’s worth noting.

  8. Look for independent commentary.
    Google the claim with words like “skeptic,” “debunked,” or “criticism” to see how experts are responding.

  9. Beware of red flags.
    Hype terms like “breakthrough” or “miracle cure” usually don’t show up in real science.

  10. Watch your biases.
    Confirmation bias (the tendency to favor evidence that supports your beliefs), the Dunning-Kruger effect (overestimating your understanding of complex topics), and post hoc reasoning (X happens after Y, so Y caused X) can all skew how we interpret health claims.
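
For step 2, you don't even have to click around manually. Here's a minimal Python sketch that queries PubMed through NCBI's public E-utilities API. The endpoint and parameters are real; the green-tea query is just our running example:

    import json
    import urllib.parse
    import urllib.request

    # NCBI E-utilities search endpoint (free; no API key needed for light use).
    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_search(query, max_results=5):
        """Return PubMed IDs (PMIDs) matching a query."""
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": query,
            "retmax": max_results,
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{BASE}?{params}") as resp:
            data = json.load(resp)
        return data["esearchresult"]["idlist"]

    # "[pt]" filters by publication type, here randomized controlled trials.
    pmids = pubmed_search('green tea heart disease AND "randomized controlled trial"[pt]')
    print(pmids)  # each ID resolves to https://pubmed.ncbi.nlm.nih.gov/<PMID>/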
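
And for step 4, here's a rough sketch of why sample size matters so much: the uncertainty around an observed response rate shrinks with the square root of the number of participants. This uses the standard normal approximation, and the 40% response rate is made up for illustration:

    import math

    def ci_95(successes, n):
        """Approximate 95% confidence interval for a proportion."""
        p = successes / n
        margin = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
        return max(0.0, p - margin), min(1.0, p + margin)

    # The same observed 40% response rate at three different sample sizes:
    for n in (20, 100, 1000):
        lo, hi = ci_95(int(0.4 * n), n)
        print(f"n={n:>4}: 40% response, 95% CI roughly {lo:.0%} to {hi:.0%}")

At n=20 the interval runs from about 19% to 61%, which is consistent with almost anything; at n=1000 it narrows to roughly 37% to 43%. That's the difference between a hint and a result you can lean on.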

Peter Attia has written an excellent breakdown of how statistics are often misunderstood in medicine. His "Studying Studies" series unpacks concepts like relative vs. absolute risk, observational bias, and the real meaning of statistical significance—all explained through clear examples and sharp, skeptical thinking.

Another fantastic book on the matter is Bad Science by Ben Goldacre. This article is based on the notes I took while reading Peter Attia's series and Goldacre's book.

Why This Stuff Matters

We are living through an era of medical individualism. I've written before that you shouldn't outsource your health because no one cares more than you do. But if we're all meant to be CEOs of our own bodies, we need better dashboards. That means learning how to read a study, or at least how to smell when something's off.

Because misreported science isn't just a media problem. It changes policy. It sells pills. And it makes people distrustful of real medicine once the snake oil wears off, as we can see in the US right now, where vaccine skepticism promoted by figures like RFK Jr. has contributed to measles outbreaks in communities where the disease had been virtually eliminated. When people lose faith in one piece of medical advice (even if it was never good advice to begin with), that skepticism can spread to treatments that actually work.

Now, to be clear: mainstream medicine isn't perfect either. Pharmaceutical companies have suppressed negative studies, medical consensus has been spectacularly wrong (remember when we thought dietary fat was the enemy?), and the healthcare system often prioritizes profit over patients. The goal isn't blind trust in authority; it's developing the tools to think critically about all health claims, whether they come from Big Pharma or your favorite wellness influencer.

So how do we navigate this landscape where both snake oil salesmen and legitimate institutions can get it wrong?

An Invitation to Read (and Doubt) Differently

This isn't about being a cynic. It's about being a thoughtful observer. About learning to pause at headlines like "drug cuts cancer risk by 50%" and asking: 50% of what? Or wondering whether a supplement ad quoting "research" actually links to anything credible.

There's still room for intuition in all this. Your body often knows things before the studies catch up: that you sleep better with magnesium, that certain foods make you feel sluggish, that walking after meals helps your digestion. The key is knowing when to trust those signals (low-risk interventions that improve how you feel) versus when to demand harder evidence (treatments for serious conditions, claims about prevention or cure). Cold plunges fall into the first category. Cancer treatments fall squarely into the second.

I'll be honest: I'm still figuring out how to balance this myself. Take sleep supplements, for instance. I've experimented with everything from melatonin to glycine to magnesium glycinate, and while some seem to help, I'm never quite sure if it's the supplement or just better sleep hygiene kicking in. The placebo effect is real, and sometimes I wonder if I'm just really good at convincing myself something works. But that uncertainty doesn't paralyze me; it just makes me more curious about what's actually happening under the hood.

Look, I get it: not everyone has the time or energy to dive into study methodology or cross-reference claims on PubMed. Most of us are just trying to get through the day without snapping at our kids or feeling exhausted by 3 PM. If you're in that camp, here's a decent shortcut: stick to health advice that's been around for decades and isn't trying to sell you anything specific. Exercise regularly (resistance, endurance, mobility, HIIT), eat mostly whole foods, get enough sleep, manage stress, don't smoke, and go easy on booze. Boring? Yes. Effective? Also yes. Everything else (the supplements, the biohacks, the miracle cures) is where we should be more careful.

And if you've ever fallen for something questionable (as I have, many times), don't beat yourself up. It means you were curious, not foolish.

Keep asking. Keep learning.

That's what good science and good health are built on.