Coincidental outcome or meaningful result? That’s the basic question we ask when we contemplate statistical significance. In statistics, the p-value helps gauge how much randomness could account for a given result. A p-value of 0.05 means that if chance alone were at work, results at least as extreme as those observed would occur about 5% of the time. Typically, statistical findings with a p-value of 0.05 or less are considered statistically significant.
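One way to make the idea concrete is to simulate it. The sketch below (a minimal illustration, with hypothetical numbers: 60 heads in 100 coin flips) estimates a p-value by asking how often a fair coin produces a result at least that far from 50/50. The exact binomial answer for this case is about 0.057, just above the usual 0.05 cutoff.

```python
import random

random.seed(0)

def simulated_p_value(observed_heads, n_flips=100, n_trials=10_000):
    """Estimate a two-sided p-value under a fair-coin null hypothesis:
    the fraction of simulated fair-coin experiments whose head count
    is at least as far from 50/50 as the observed count."""
    expected = n_flips / 2
    observed_dev = abs(observed_heads - expected)
    extreme = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if abs(heads - expected) >= observed_dev:
            extreme += 1
    return extreme / n_trials

# 60 heads in 100 flips: how surprising is that under pure chance?
print(simulated_p_value(60))
```

Note what the p-value is: the probability of data this extreme *assuming* chance alone, not the probability that the result itself is due to chance.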
Statistically significant results should not be confused with the broader, everyday meaning of ‘significant’. We have ‘significant’ others and ‘significant’ life events. That general use of ‘significant’ suggests something large and impactful. That’s not necessarily what statistically significant results indicate.
An experimental treatment for cancer might show a statistically significant increase in survival rates from 10% to 11%. Statistically, it’s a significant result, but in a broader sense, how significant is that treatment, really? Would it become a treatment regularly used to treat cancer? Or would current, less expensive treatments be just as effective?
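The gap between the two senses of ‘significant’ often comes down to sample size. The sketch below (a simple two-proportion z-test; the trial sizes are hypothetical assumptions) shows that the same one-point survival gain clears the 0.05 bar with 10,000 patients per arm but not with 500.

```python
import math

def two_proportion_p_value(p1, p2, n1, n2):
    """Two-sided p-value for a z-test comparing two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

# Hypothetical trial: survival rises from 10% to 11%.
# With 10,000 patients per arm, the 1-point gain is "significant" ...
print(two_proportion_p_value(0.10, 0.11, 10_000, 10_000))
# ... but with 500 per arm, the very same effect is not.
print(two_proportion_p_value(0.10, 0.11, 500, 500))
```

A big enough study can make almost any real but tiny effect statistically significant; the p-value alone says nothing about whether the effect is worth acting on.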
There’s some debate about eliminating p-values altogether and focusing instead on both the size and range of the experimental outcome or effect. If an experimental treatment works on 60% of patients and can be used to treat a number of different cancer types, those results would be both statistically significant and more broadly significant.
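Reporting “size and range” usually means an effect estimate with a confidence interval. A minimal sketch, using the hypothetical 60%-of-patients figure above and a simple Wald interval:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% Wald confidence interval for a proportion: reports the
    effect size (the proportion itself) and its plausible range."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical: the treatment works for 120 of 200 patients (60%)
low, high = proportion_ci(120, 200)
print(f"60% response rate, 95% CI: {low:.1%} to {high:.1%}")
```

An interval like this tells a reader both how big the effect is and how precisely it has been pinned down, which a bare p-value does not.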
Keep in mind, too, that if you run a study 100 times, or analyze the study data 100 different ways, about 5% of those results will, on average, be statistically significant purely by chance. And what if those chance results are the ones published and the ones receiving the most media attention?
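This multiple-comparisons problem is easy to demonstrate. The sketch below runs 100 comparisons on pure noise, where there is no real effect at all, and counts how many come out “significant” at p < 0.05; the expected count is about 5.

```python
import math
import random

random.seed(1)

def two_sample_p(n=100):
    """P-value comparing two samples drawn from the SAME distribution,
    so any 'significant' result is a false positive (z-approximation)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    z = abs(ma - mb) / math.sqrt(va / n + vb / n)
    return math.erfc(z / math.sqrt(2))

# 100 analyses of pure noise: roughly 5 should clear p < 0.05
false_positives = sum(two_sample_p() < 0.05 for _ in range(100))
print(false_positives)
```

If only those handful of chance hits get written up, the published record looks far more impressive than the underlying data.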
The p-value certainly has value, but be careful to understand the limitations of statistically significant p-values in study results, and how those statistically significant results might be interpreted by the broader public.