The post immediately before this one isn’t really a blog post. I was doing some research for a project and happened upon that paper. Rarely have I seen so many failures and problems in so few lines of a peer-reviewed paper. So I spent some time writing a response, initially intending to cover all of the issues, and then realized 1) I was taking the time to write a journal-style comment paper I had no intention of publishing and 2) nobody was going to read what I was writing. Although cathartic, I felt I needed additional justification for the time I’d wasted, so I made a few simplifications and other edits, wrote up a reference section, and posted it here.
I think the last straw for me was the comment that “Einstein would have been pleased” by a null-hypothesis testing approach to whether the velocity of light was constant. Apart from everything else, we’re talking about a guy who fought against the reliance upon statistics in modern physics for almost his whole life. After practically founding quantum mechanics, he spent most of his career trying to show that it was at best an approximate statistical mechanics and should not be regarded as physics, or at most as “physics” only in the sense that statistical mechanics was, which, you’ll recall, was initially treated with such hostility that its founder, Boltzmann, killed himself (ok, that’s probably not true, and the critical reaction to Boltzmann is too often exaggerated). I find the continued use of NHST despite the enormous and unanswered body of critical studies problematic enough without a defense founded on how NHST could have led to the results that better methods actually produced. If the absurdity of the claim about the speed of light weren’t enough to send me into a tirade of pointless writing, the remark about Einstein definitely clinched it. Really? The guy who fought tooth and nail against his own brainchild because it (quantum mechanics) described properties of systems in terms of probabilities, who is famous for his quip “God does not play dice”, would appreciate the introduction into physics of a pointlessly convoluted, inferior research method that is most frequently used to make probabilistic statements about things that we define into existence (and that therefore lack even the ontological status of systems in quantum & particle physics)? Right.
Because it’s entirely likely that Einstein or any physicist of the time would think: “You know, instead of just measuring the speed of light to determine whether it is constant or whether we can detect this postulated (a)ether stuff, let’s refer to such hypotheses as either a ‘null hypothesis’ or an ‘alternative’ (artificially ensuring there are only two possibilities), assume the null to be true, and then calculate the probability that we’d get the data we did under that assumption. After all, it’s not like we’ve gotten anywhere by just testing whether some hypothesis is likely to be true without arbitrarily and pointlessly defining it in terms of a single contrary hypothesis, so that we can say things like ‘there was a statistically significant difference between light treated as waves versus light treated as quanta’ or ‘the effect of gravitation on moving bodies is statistically significant.’ If we don’t start by assuming the truth of some null hypothesis we usually aren’t interested in, so that we can provide flawed arguments for rejecting it and even worse arguments for explicitly or implicitly accepting an arbitrarily and inaccurately singular alternative hypothesis, how are we going to be taken seriously as scientists?”
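For anyone unfamiliar with the recipe being parodied above, here is a minimal sketch of what the NHST procedure looks like alongside the estimation approach physicists actually used. The measurement values are entirely made up for illustration, and the null value and the normal approximation to the p-value are my own assumptions, not anything from the paper in question:

```python
import math

# Hypothetical speed-of-light measurements in km/s (made-up numbers,
# purely illustrative)
measurements = [299793.1, 299792.3, 299794.0, 299792.8,
                299793.5, 299791.9, 299793.2, 299792.6]

n = len(measurements)
mean = sum(measurements) / n
var = sum((x - mean) ** 2 for x in measurements) / (n - 1)  # sample variance
se = math.sqrt(var / n)  # standard error of the mean

# NHST framing: assume a null value is true, then compute how improbable
# the observed mean is under that assumption (two-sided, normal approx.)
c_null = 299792.458  # the modern defined value, used here as an assumed null
z = (mean - c_null) / se
p_value = math.erfc(abs(z) / math.sqrt(2))

# Estimation framing: skip the null entirely and just report the
# measured value with its uncertainty (a 95% interval)
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean = {mean:.3f} km/s")
print(f"95% interval = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"p-value under the null = {p_value:.3f}")
```

The point of the contrast: the interval tells you what the data say about the quantity itself, while the p-value only tells you how surprising the data would be if a single hypothesis, chosen in advance, happened to be exactly true.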