Second, communication with practitioners is difficult enough without academics rolling out their pet hobby horses at joint academic-practitioner conferences. I saw this twice at the information systems assurance conference, but I am only going to talk about one instance because it was done by a senior professor who can take the hit if readers identify him.
This academic was asked to discuss a reasonably well done paper, but instead of truly discussing the paper he rolled out his hobby horse about the problems with p-value thresholds being used as absolute cutoffs between meaningful and not meaningful results (i.e., the magic of p < 0.05). Now here we are with an audience of practitioners who are already skeptical of academic evidence, and a senior professor tells them that p-values are meaningless.
Now, I fully appreciate what he was trying to say: there is no magic in 0.05 or 0.10; effect size matters; how much of the variation is explained matters; etc. But he said “p-values are meaningless” and backed it up by citing the factoid that the Strategic Management Journal (a top strategy/OB journal) had banned the reporting of p-values since 2010!
As an experienced editor, I was reasonably certain I would have heard of that. Indeed, what SMJ actually said was to get rid of the “magic” of 0.05 and replace it with effect sizes (which are strongly, inversely related to p-values) and/or report EXACT p-values. No * to indicate under 0.05! That is a huge difference from saying SMJ banned p-values; what they actually banned was artificial cutoffs of significance.
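To make the distinction concrete, here is a minimal sketch (my own illustration, not SMJ's actual style guide) of reporting an exact p-value alongside an effect size (Cohen's d) instead of a significance star. The data and function names are hypothetical, and the p-value uses a normal (z) approximation to keep the example dependency-free; for small samples you would use a proper t distribution.

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Effect size: standardized mean difference using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * stdev(a) ** 2 +
                        (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def exact_p_two_sided(a, b):
    """Two-sided p-value from a z approximation to the two-sample t statistic
    (adequate for large samples; a t distribution is better for small ones)."""
    na, nb = len(a), len(b)
    se = math.sqrt(stdev(a) ** 2 / na + stdev(b) ** 2 / nb)
    z = (mean(a) - mean(b)) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical illustrative data, not from any real study.
treatment = [5.1, 4.9, 5.6, 5.8, 5.0, 5.4, 5.7, 5.2]
control   = [4.6, 4.8, 4.5, 5.0, 4.4, 4.9, 4.7, 4.6]

d = cohens_d(treatment, control)
p = exact_p_two_sided(treatment, control)

# Report the exact numbers, not "p < 0.05" or a star:
print(f"d = {d:.2f}, p = {p:.4f}")
```

The point of the reporting style is the last line: the reader sees the magnitude of the effect and the exact p-value, and can judge for themselves rather than being handed a binary verdict at an arbitrary cutoff.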
So, me being me, I publicly corrected this faculty member, and I heard huge sighs of relief from both academics and practitioners in the room once I explained what SMJ actually said. It is tough enough to communicate academic research results without muddying the waters with irrelevant academic debates about the number of zeros on the head of a pin! It probably cost me another friend, but these things are too important to let slide!