Yeah, there are pros and cons on both sides of this. From those blog posts it seems that the recommendations of the ASA 2016 statement are most reasonable (and I think they are emphasized in the webinar I link to):
- P-values can indicate how incompatible the data are with a specified statistical model.
- P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
- Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
- Proper inference requires full reporting and transparency. P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable (see the sketch after this list).
- A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
- By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
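To make the selective-reporting point concrete, here is a minimal Python sketch (assuming NumPy and SciPy are available; the 20-outcome setup and sample sizes are invented purely for illustration). It tests 20 outcomes on two groups drawn from the same population, so every "significant" result is a false positive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 20 "outcomes" measured on two groups drawn from the SAME population,
# so every null hypothesis is true by construction.
n_outcomes, n_per_group = 20, 30
p_values = []
for _ in range(n_outcomes):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    p_values.append(stats.ttest_ind(a, b).pvalue)

significant = [p for p in p_values if p < 0.05]
print(f"{len(significant)} of {n_outcomes} tests gave p < 0.05 with no real effect")
# Reporting only those would look like a discovery, yet the chance of at
# least one p < 0.05 across 20 independent tests is about 1 - 0.95**20, ~64%.
```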
I also recall reading (or hearing) that the p-value is currently the only thing holding back a floodgate of irreproducible findings, and I agree that the second ASA statement, calling for the removal of the p-value, goes too far.
Maybe the key point is that while frequentist statistics are mathematically sound, most people don't really understand what significant results imply. I think the second point above is a common problem for biologists: the assumption is that p < 0.05 means there is a >95% chance that the two sampled populations are different, while it actually means there is a <5% chance of a difference at least as large as the one observed occurring if the populations are the same.
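A quick simulation makes that distinction concrete (again a sketch assuming NumPy/SciPy; the number of simulations and sample sizes are arbitrary). Drawing both samples from the same population, about 5% of t-tests come out below 0.05, which is exactly the error rate the threshold controls, not a 95% probability that the populations differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Repeatedly compare two samples drawn from the SAME normal population.
n_sims, n_per_group = 10_000, 30
pvals = np.array([
    stats.ttest_ind(rng.normal(0, 1, n_per_group),
                    rng.normal(0, 1, n_per_group)).pvalue
    for _ in range(n_sims)
])

# Roughly 5% of runs give p < 0.05: the threshold controls the rate of
# seeing such a difference when the populations are identical, not the
# probability that they actually differ.
print(f"fraction with p < 0.05: {(pvals < 0.05).mean():.3f}")
```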