Does a treatment really work? A brief introduction to medical statistics

Assessing actual patient outcomes is difficult. Gathering the data is time-consuming, expensive, and full of potential pitfalls, and using medical statistics to make sense of the resulting figures is trickier still. When a new study is published, it usually compares the new treatment or medication against the previous best method, so the gains are often very modest. To complicate things further, the results are often quoted as an ‘odds ratio’, a concept that is frequently poorly understood.

Odds Ratio

Whole articles, and even entire websites, exist to try to explain odds ratios. I am (foolishly) going to attempt it too.
 
An odds ratio is a single number. Assuming the outcome being counted is a bad one (death, say):
·      Odds ratio <1 – assessed intervention is better than control
·      Odds ratio =1 – no difference between intervention and control
·      Odds ratio >1 – assessed intervention is worse than control
 
In a medical study we use odds ratios to compare the outcomes of two groups of patients. The odds in the control group always form the denominator, and the odds in the intervention group form the numerator.

Odds ratio = (odds in intervention group) / (odds in control group)
 
So, say we have a new treatment. We give it to group X. Group Y gets the previous best treatment. We then follow everyone up and compare all-cause mortality.
Let's say each group had 100 patients.
14 patients die in group X. The odds for group X are 14/(100-14), or 14/86, or about 0.16.
20 patients die in group Y. The odds for group Y are 20/(100-20), or 20/80, or 0.25.
 
Our odds ratio is therefore (14/86) / (20/80) ≈ 0.65.
This is a good odds ratio. Our treatment works!
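
For anyone who prefers to see the arithmetic spelled out, here is a minimal Python sketch of the calculation above (the odds helper function is mine, purely for illustration):

    def odds(events: int, total: int) -> float:
        """Odds of an event: the number who had it divided by the number who did not."""
        return events / (total - events)

    # Worked example: 14/100 deaths in group X (new treatment),
    # 20/100 deaths in group Y (previous best treatment).
    odds_x = odds(14, 100)   # 14/86 ≈ 0.163
    odds_y = odds(20, 100)   # 20/80 = 0.25

    odds_ratio = odds_x / odds_y
    print(f"Odds ratio: {odds_ratio:.2f}")   # prints 0.65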
 
If, say, we have an odds ratio of 0.75, this means that the odds of the outcome (here, death) are 25% lower with the new treatment than with the control. Note that this is a statement about odds, not about risk: it does not mean a patient on the new treatment has a 25% better chance of a good outcome. Only when the outcome is rare does the odds ratio come close to the relative risk.
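
To make the odds-versus-risk distinction concrete, here is a small Python sketch using the worked example above; note how the relative risk (0.70) and the odds ratio (0.65) differ even here:

    # Worked example: 14/100 deaths in group X, 20/100 in group Y.
    deaths_x, total_x = 14, 100
    deaths_y, total_y = 20, 100

    risk_x = deaths_x / total_x                # 0.14
    risk_y = deaths_y / total_y                # 0.20
    relative_risk = risk_x / risk_y            # 0.70

    odds_x = deaths_x / (total_x - deaths_x)   # ≈ 0.163
    odds_y = deaths_y / (total_y - deaths_y)   # 0.25
    odds_ratio = odds_x / odds_y               # ≈ 0.65

    print(f"Relative risk: {relative_risk:.2f}, odds ratio: {odds_ratio:.2f}")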
 
But it doesn’t end there. There are also the ‘confidence interval’ and the ‘p value’.
 

Confidence Interval

This is a measure of how precisely we have estimated our OR. Some complex maths goes into calculating it. Most medical studies cite a ‘95% confidence interval’: a range of values around the OR within which the true value plausibly lies. A confidence interval that includes ‘1’ in its range means there is likely no real difference between the two groups of patients. A confidence interval whose entire range is <1 (or entirely >1) means there likely is a difference between the two groups.
 
An example in our case might be something like:
OR = 0.65, 95% CI 0.48 – 0.72.
 
This would mean that the odds of death with the new treatment are likely somewhere between 28% and 52% lower than with the control.
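
If you want to see where such an interval comes from, here is a minimal Python sketch of the most common approximation (the Woolf, or log-odds, method; the helper function name is mine). Run on the small worked example above, it gives a much wider interval that crosses 1, a taste of the sample-size problem discussed under p values below:

    import math

    def or_with_ci(events_a, total_a, events_b, total_b, z=1.96):
        """Odds ratio and approximate 95% CI via the Woolf (log-odds) method."""
        # The four cells of the 2x2 table.
        a, b = events_a, total_a - events_a   # intervention: died / survived
        c, d = events_b, total_b - events_b   # control: died / survived

        odds_ratio = (a / b) / (c / d)
        # Standard error of log(OR): square root of the summed reciprocal cells.
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lower = math.exp(math.log(odds_ratio) - z * se)
        upper = math.exp(math.log(odds_ratio) + z * se)
        return odds_ratio, lower, upper

    # Worked example: 14/100 deaths vs 20/100 deaths.
    odds_ratio, lower, upper = or_with_ci(14, 100, 20, 100)
    print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} - {upper:.2f}")
    # OR = 0.65, 95% CI 0.31 - 1.38: with only 100 patients per group,
    # the interval is wide and crosses 1.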
 

P values

A p value assesses whether the observed difference is statistically significant: roughly, it is the probability of seeing a difference at least this large if there were truly no difference between the groups. A p value of <0.05 is conventionally taken to indicate a statistically significant difference.
A p value of >0.05 implies no statistically significant difference was found. This doesn’t necessarily mean that your experiment or research has been a waste of time; it may simply mean the study was too small, and that a larger sample would be needed to detect a real difference.
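
As a rough sketch of where such a number comes from, the p value for a 2x2 table like our worked example is often obtained from a chi-squared test. A minimal Python version (no continuity correction, standard library only) looks like this:

    import math

    def chi_squared_p(events_a, total_a, events_b, total_b):
        """Two-sided p value from a 1-df chi-squared test on a 2x2 table."""
        table = [
            [events_a, total_a - events_a],
            [events_b, total_b - events_b],
        ]
        grand = total_a + total_b
        row_sums = [total_a, total_b]
        col_sums = [table[0][0] + table[1][0], table[0][1] + table[1][1]]

        chi2 = 0.0
        for i in range(2):
            for j in range(2):
                expected = row_sums[i] * col_sums[j] / grand
                chi2 += (table[i][j] - expected) ** 2 / expected

        # For 1 degree of freedom, the chi-squared survival function
        # reduces to the complementary error function.
        return math.erfc(math.sqrt(chi2 / 2))

    # Worked example: 14/100 deaths vs 20/100 deaths.
    print(f"p = {chi_squared_p(14, 100, 20, 100):.2f}")
    # p ≈ 0.26: above 0.05, consistent with the wide confidence interval above.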
 

Do I need to understand all this?

Um, well, probably a little bit. If you are going to do some research, then definitely, and most doctors will do some research at some point in their career. Even if you don’t, you will still need to be able to interpret these numbers when you read journal articles, so that you can make decisions about treatment.