- “Recent studies have proven …”
- “In a clinical trial, more than 80% of those who …”
- “According to research …”
These are the magic words. They make us believe whatever the speaker is saying. Not only do we believe it; we often pass it along to our friends over dinner.
Our intention is good. The scientific method is the best process we have to determine what is true and what is not. So if we hear about clinical results, we believe. And by spreading that knowledge, we are champions of light, fighting back against the darkness that is ignorance.
The more I learn about science, the less comforted I am by the words “a recent study.” Especially in the field of psychology.
First example: Do you remember the TED Talk on power poses? It was huge back in the day. By standing with confidence, you gained a testosterone boost, a +4 charisma bonus, and a drop in cortisol. According to studies.
I wish power posing were an outlier. That it was a fluke. That most studies, when checked, did not become wrecked.
Unfortunately, power posing is not an outlier. Psychology has a huge problem with replicating results. What is proven in one study can’t be found in the next.
Second example: I’ve written about meditation and mindfulness in this context. There’s a lot of shitty research out there on how these practices can make anything in life better. These studies rarely define the technique clearly, often lack placebo controls, and can’t be replicated. And yet every week HBR posts a new study on how mindfulness improves everything from leadership to psoriasis.
But my main concern is not power posing nor mindfulness. It is rather that this is a widespread problem. And my main takeaway is this:
We should no longer be satisfied with “recent studies.” It’s time to step up our critical thinking, even about research itself. When someone shows up brandishing a recent study, you should ask questions about how it was done.
How many people were in the study? How big was the difference? Has another team at another university been able to replicate it?
To be clear: if you don’t get satisfactory answers to these questions, that doesn’t mean the study is worthless. We can note the information; we just don’t have to take it too seriously. Above all, we stop treating science as something binary.
This does not mean that whatever garbage Joe Rogan spews with a crazy unemployed archaeologist is as true as what comes out of academia. It doesn’t mean that everything is true and nothing is true. One study with a small cohort, no placebo controls, and no replication is still better than no study at all.
It does mean that we are now beyond the age when you can merge a dozen headlines from Bloomberg Newsweek into a book. Or when you can stand with confidence (no meta-reference intended) on a stage and say you have the keys to Success and Happiness, with nothing more than a few shoddy papers to back you up.
Speakers and writers: a little more paranoia will serve us well. Did we read the journal or just the headline?
Listeners: a little more skepticism will get you closer to the truth. (It might also make you seem annoying and boring. (Speaking from experience.))
I am very much writing this to myself. I have been too sloppy in my own quest for a solution to tech’s growing problem with mental illness. I will be more humble and more careful when presenting my findings, and more diligent when I search for research.
The good news?
If we all follow this path, there will be less bullshit to sift through. Less noise. More honest, effective information that we, in turn, can build further knowledge on.
And that, my friends, is a quest worth going on.