This is a newsletter about health-related topics. I'm a psychiatrist when I am not writing this! Today's piece is educational content to help my readers save time when looking at science. People ask me to look at research studies all the time:
“Have you seen that study?”
Yes, I've seen that study. How do I get through all the research? I use some simple tricks to save a lot of time. I'm going to share some of them with you today!
Science articles have a predictable format. The first thing I do is…
NEVER READ THE WHOLE PAPER.
Original Research has the following format:
Abstract: Read this.
Introduction: Do not read this.
Methods: Read this carefully! If there are no red flags…
Results: Read this!
Discussion: Don’t read this.
Conclusion: Don’t read this.
I have just reduced the amount you need to read by a lot.
Now, since you have no reason to trust me, I will have to waste both our time justifying my advice above. So be it!
Abstract: This is very short. It will summarize what the rest is about.
Introduction: Do not read this. This is fluffy background. If you want to learn to write papers, you will need to read some of these. But no science is happening in the introduction.
Methods: Read this carefully! It helps us understand what the article might be able to demonstrate. I promise you: it is the only mandatory section1. I will help you with tips that require no knowledge of the subject matter. They are all about formatting and spot-checking for bs. There is a lot of bs in science. I promise you: not finishing an article that is bullshit has zero penalty. Not finishing a groundbreaking article, erroneously, also has zero penalty! Others will replicate it, and you will see it again!
Results: Read this! If the methods pass the “sniff test” for bullshit, the findings are in this section.
Discussion: Don’t read this. This is SPIN. This is the authors explaining what they want you to think. You don’t need their help. Are the methods real? Are the results significant? Are they significant…and big enough to care about? YOU WILL ALREADY KNOW THIS by the time you get here. You don’t need their bias, and it’s not worth your time.
Conclusion: Don’t read this; they said it above.
More time savers!
To make claims about causality, a clinical trial must first establish that randomization was successful. This means every reputable randomized study will have the same table one.
Table one is our double check on whether the randomization was successful.2
Table one should include baseline demographics like this:
Table one has this format…
I don't even need to know the field of medicine! I'm only asking a “there/not there” question to start.
TFPs (Patent Pending) Fast Lane.
No table one? Ignore.
But what if I know nothing about this science and it looks interesting?
I know nothing about that modality. But I am going to be OK, because I look at Table one first:
Why are they showing me data about the sham and active intervention in “table one?” That is not what goes in table one. I get it; there is a very significant P value! But it doesn’t go in table one.
Ignore this study.
But it might be groundbreaking!? Yes. And then it will be replicated by scientists who know how to write a paper.
I can’t answer the question, “Can XYZ intervention cause this difference?” because I don’t know whether the randomization was successful. I’m sure very prestigious journals won't waste my time this way…
Moving on to table two…the following is just my favorite quote from a journal in a while, forgive me:
To prevent the audience from getting bored while reading a scientific article, some data should be expressed in a visual format in graphics and figures rather than crowded numerical values in the text. Peer reviewers frequently look at tables and figures. High-quality tables and figures increase the chance of acceptance of the manuscript for publication.
Yes, peer reviewers do look at those tables! We do, we do! They continue to point out some important details:
Most of the readers priorly prefer to look at figures and graphs rather than reading lots of pages.
I feel so seen!
Table one is supposed to include baseline stuff. This includes demographics and clinical characteristics. The two groups must be the same at baseline—no significant differences. If there are significant differences, they were not successfully randomized! We can stop reading right there! Because, and this is important—
IF THE TWO GROUPS ARE DIFFERENT AT THE START AND DIFFERENT AT THE END, HOW CAN WE KNOW THE INTERVENTION DID ANYTHING?
We cannot. I’m serious. Stop reading; next paper. No information about causality can be inferred.
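The baseline check is simple enough to sketch in code. Here is a toy, hypothetical example (every age below is invented) of asking the only question table one needs to answer: is the gap between the two arms bigger than chance alone would produce? This sketch uses a simple permutation test rather than the t-tests or chi-squares a real table one would report:

```python
import random
from statistics import mean

# Hypothetical baseline ages for two trial arms; all numbers invented.
control = [34, 41, 29, 38, 45, 31, 36, 40]
treatment = [33, 39, 30, 37, 44, 32, 35, 42]

observed = abs(mean(treatment) - mean(control))

# Permutation test: shuffle the group labels many times and count how
# often chance alone produces a baseline gap at least this large.
random.seed(0)
pooled = control + treatment
n = len(control)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = abs(mean(pooled[:n]) - mean(pooled[n:]))
    if gap >= observed:
        extreme += 1

p_value = extreme / trials
print(f"baseline gap = {observed:.2f}, p = {p_value:.3f}")
```

With balanced numbers like these the p-value comes out large, which is consistent with successful randomization. A significant baseline difference is the opposite signal: the one that tells you to stop reading.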
Here is my favorite classic example, as published in the American Journal of Psychiatry. The headline for the study was…positive.
If this newsletter has a slogan, it might be “helps do what, and for whom?” My argument requires knowing nothing about the science.
Here is the title of the paper:
We read the abstract. Nothing special. What do the figures look like?
Whoa, that graph is from results! Methods first…In table one, they presented demographics:
I will annotate my issue. We can basically stop reading the paper now. But you won't. This will prove my point about wastes of time:
Table two looks fishy. First off, it’s more baseline data. That goes in table one! This is table one.2, not table two! And hold on…
The intervention group was sicker at baseline. Regression to the mean is a real statistical phenomenon: they will get “less sick” by chance alone.
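Regression to the mean is easy to demonstrate with a toy simulation (every number here is invented): give simulated patients a stable true severity plus measurement noise, select the ones who look sickest at baseline, and re-measure them with no treatment at all:

```python
import random
from statistics import mean

random.seed(1)

# Each simulated "patient" has a stable true severity plus measurement noise.
true_severity = [random.gauss(50, 5) for _ in range(10_000)]

def noisy(t):
    return t + random.gauss(0, 10)

baseline = [noisy(t) for t in true_severity]
followup = [noisy(t) for t in true_severity]  # no treatment given to anyone

# Select the patients who looked sickest at baseline...
sickest = [i for i, b in enumerate(baseline) if b > 65]

base_mean = mean(baseline[i] for i in sickest)
follow_mean = mean(followup[i] for i in sickest)
print(f"sickest group: baseline {base_mean:.1f} -> follow-up {follow_mean:.1f}")
# ...and watch their average drop with zero intervention: their extreme
# baseline scores were partly measurement noise that does not repeat.
```

The group that was sickest at baseline improves substantially on re-measurement even though nobody was treated. That is exactly why a sicker-at-baseline intervention arm can “improve” without the intervention doing anything.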
The difference shown in the results:
Nope. It’s what happens when your randomized controlled trial has the veracity of a referee at a Harlem Globetrotters basketball game: the outcome was not because they won the game…it was a setup.
Now, we parsed that thing way more than we needed to. We should have stopped at “table one and two are actually table one and that ain’t right!”
This is the kind of rapid assessment and analysis that lets anyone—expert in the science or not—quickly dismiss papers that are unlikely to have useful conclusions.
This paper is, of course, a classic cautionary tale. It was such a bad study that it got a whole write-up in the Times!
Here is how absurd the behavior that led to the data was…
Two of the study participants were living in a residential treatment facility for sex offenders and may have lied about their diagnoses to qualify for the trial. One of those men slipped the drugs to unwitting treatment center residents and staff, an alarming development that nevertheless did not seem to ruffle the university oversight board that is charged with looking into such episodes.
And even more horrifyingly absurd:
Spiked Oatmeal
One morning in May 2010, residents and staff members at Alpha Human Services, a residential treatment facility for sex offenders in Minneapolis, sat down to breakfast and noticed something strange about their oatmeal. It was pink.
I think we can all agree that drugs slipped into the oatmeal of non-study participants are unlikely to be effective in the treatment of anything other than “disorders of oatmeal flavor by proxy”?
That is nightmare behavior in a research study. However, you did not need to know any of it.
We could have avoided wasting time on that paper! Just use the TFPs (Patent Pending) Fast Lane approach.
Thanks for reading! Share with your friends! I can also let you know that subscribing to this newsletter is the cool thing to do among people who are attractive, influential, and generally live enviable lives. You could become a paid subscriber too. We are all getting together later just to talk about how much we love living our dreams. It’s like Andrew Tate’s war room, absent the Misogyny, War, Rooms, Tate(s), and serious charges of criminal behavior.
If you are not financially in a place to subscribe but want to support my work, the following methods are meaningful:
You can review my podcast!
And review my books on Amazon!
I even have music you can follow along with, and I will appreciate the support.
And of course…
1. A lot of the time.
2. The reason to randomize is to determine causality; if the randomization failed or wasn't reported, we're not looking at a randomized controlled trial. Thus we can make no inferences about causality.
You missed the single most important factor. Zip down through the paper, and look at the financial conflicts of interest. Psychiatry research is massively influenced by fCOI's, and any paper with them is completely suspect.
I enjoyed this, and I’m archiving it in case I ever find myself reading one of these papers. Also-- I envy you, because if I ever wrote anything like this about one of my fields (songwriting, music teaching, etc.), it would be a miracle if I ever worked again.