In order to smoothly transition to Part II of the general problems from the social sciences (or “social and behavioral sciences”), I thought the following would be à propos (no, I don’t know why I’m trying to sound more French):
“In the 1930s, the British Association for the Advancement of Science installed a number of its members with a most peculiar task: to decide whether or not there was such a thing as measurement in psychology. The commission, consisting of psychologists and physicists (among the latter was Norman Campbell, famous for his philosophical work on measurement), was unable to reach unanimous agreement. However, a majority of its members concluded that measurement in psychology was impossible…Reese (1943, p. 6) summarized the ultimate position of the commission: ‘they [the members of the commission] argue that psychologists must then do one of two things. They must either say that the logical requirements for measurement in physics, as laid down by the logicians and other experts in the field of measurement, do not hold for psychology, and then develop other principles that are logically sound; or they must admit that their attempts at measurement do not meet the criteria and both cease calling these manipulations by the word “measurement” and stop treating the results obtained as if they were the products of true measurement.’” (emphasis added)
Borsboom, D. (2005). Measuring the Mind: Conceptual Issues in Contemporary Psychometrics. Cambridge University Press.
Part I concerned the issues relating to measuring “constructs” like religiosity or intelligence: essentially, first defining some concept as a particular empirical phenomenon characterized by the definition, and then using both the assumed definition AND the assumption of its empirical reality to investigate it. I am not the first to raise this issue. In the quotation above, we find that all the way back in the 1930s (when dinosaurs roamed and nobody knew what mp3s or iPhones were), an esteemed scientific organization convened a commission to determine the legitimacy of this procedure for psychology. Here’s the kicker: despite the fact that a majority of the commission members (and a large number of scientists & philosophers since) agreed that you can’t measure things you defined into existence, the practice only became more prevalent. Basically, there was a problem that undermined a large portion of scientific research, but nobody knew how to solve it, so everyone just kept doing it. Compare that quote with the following:
“In the year 1960, John Tukey published a paper on the so-called contaminated or mixed normal distribution that would have devastating implications for conventional inferential methods based on means. Indeed, any method based on means would, by necessity, suffer from a collection of similar problems. Tukey’s paper had no immediate impact on applied work, however, because it was unclear how to deal with the practical implications of his paper.” (emphasis added)
Wilcox, R. R. (2010). Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy. Springer.
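As an aside, Tukey’s point is easy to see with a quick simulation (my own sketch, not from either book): draw most observations from a standard normal distribution, but let a small fraction come from a normal with a much larger spread. The result looks deceptively normal near the center, yet its variance balloons, and with it the variance of the sample mean that conventional methods rely on. The 10% contamination rate and the tenfold standard deviation below are illustrative choices.

```python
import random
import statistics

random.seed(1)

def contaminated_sample(n, p=0.1, wide_sd=10.0):
    """Tukey-style mixed normal: with probability p, an observation is
    drawn from a normal with a much larger standard deviation."""
    return [random.gauss(0, wide_sd if random.random() < p else 1.0)
            for _ in range(n)]

clean = [random.gauss(0, 1) for _ in range(100_000)]
mixed = contaminated_sample(100_000)

# Analytically, the mixture's variance is 0.9 * 1 + 0.1 * 100 = 10.9,
# roughly eleven times that of the clean normal -- even though only
# one observation in ten is "contaminated". Any inference whose
# accuracy depends on the variance of the mean inherits this problem.
print(statistics.variance(clean))  # close to 1
print(statistics.variance(mixed))  # close to 10.9
```

In other words, a contamination you would struggle to spot in a histogram is enough to wreck the efficiency of the mean, which is the practical sting of Tukey’s paper.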
Without getting into detail: a widespread assumption about research designs and the corresponding statistical analyses across the sciences turned out to be false. But because nobody knew what to do about it, it was largely ignored. Today, luckily, there is a host of fixes and alternatives, but guess what? The problematic assumption still appears in textbooks and research; the biggest change is the growing number of studies and fields that regularly make this unjustified assumption. Introducing Part II: