Lies, Damned Lies, and Research: Systematic misuse of statistics in the sciences

Mathematics is the language of the sciences.1 It is a tool essential to research across all fields, and it is so intricately bound to modern physics that in some areas (e.g., quantum mechanics) it is difficult to separate the mathematical framework from the theoretical one. So why don’t we teach it?

“That’s ridiculous! Everybody who goes to school learns math, you [insert insulting epithet here].” I know, but because I intend to write a few posts on how we fail to teach it as we should, I opted for brevity. Before I can do that, I need to write a little about how important math is in research and why failures to teach it adequately render thousands of research papers across fields largely or utterly useless. Rather than give some exhaustive account, here I’ll just show how many fields rely on statistical tests that are nearly 80 years out of date.

It may be hard to believe, but researchers in fields as diverse as business & managerial sciences and the medical sciences use the same standard statistical tests. Here, I’ll prove it. The following is from a paper by two experienced researchers (one with both an MD and a PhD) in a volume on pain research methods:

    Standard tests such as analysis of variance (ANOVA), Scheffé’s F test, and serial t-tests…should be used to ensure statistical significance (p < 0.05).2

What these “standard tests” are is far less important than that they are “standard”. But standard for whom? It’s true the authors probably mean “standard” in the fields their target audience works in:

    “target readers include beginners in pain research who may have substantial training and experiences in other fields; and pain researchers who may have extensive knowledge and experience in a specific field, but who may want to extend their research to a new level” (p. v).

Clearly, business researchers don’t qualify. Yet looking at a textbook on business research methods,3 we find ANOVA, t-tests, and F-tests all on one page in a chapter summary of “Key Concepts” (p. 548). So despite the radical differences between business research and labeling nociceptive neurons, the same statistical tests are standard. I hope you’ll now take on faith that these same tests are everywhere in the sciences (they were developed for use in physics, chemistry, and biology).
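
To make that concrete, here is a minimal sketch (mine, not from any of the sources above, and using made-up data) of what running two of these “standard tests” looks like with SciPy:

    import numpy as np
    from scipy import stats

    # Made-up samples standing in for measurements from three experimental groups.
    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=10.0, scale=2.0, size=30)
    group_b = rng.normal(loc=11.0, scale=2.0, size=30)
    group_c = rng.normal(loc=10.5, scale=2.0, size=30)

    # Two-sample t-test (the building block of "serial t-tests").
    t_stat, t_p = stats.ttest_ind(group_a, group_b)

    # One-way ANOVA: an F test of whether the three group means differ.
    f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)

    print(f"t-test: t = {t_stat:.3f}, p = {t_p:.4f}")
    print(f"ANOVA:  F = {f_stat:.3f}, p = {f_p:.4f}")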

So what would you think if I told you that there are superior alternatives to all of these standard tests? You would probably (and justifiably) trust the thousands of research papers that use these tests over some blogger, so I’ll let someone else tell you:

    Many of the statistical methods routinely used in contemporary research are based on a compromise with the ideal… The compromise is represented by most statistical tests in common use, such as the t and F tests…4

The most widely used statistical tests (including the t and F statistics) were developed by Pearson, Galton, Edgeworth, Gosset, and Fisher in the late 19th and early 20th centuries. Of course, being old isn’t the issue. The issue is that the “ideal…represented by permutation tests, such as Fisher’s exact test”5 includes tests that have been around since the ‘30s. Why did Fisher and others not use the ideal methods they developed? They couldn’t. The number of calculations required would demand some kind of computing machine. So until we have these computing machines, or “computers” for short, researchers will have to settle for the compromise. Alternatively, considering we’ve HAD computers for some 60+ years, we could change math education. It’s a radical idea, I know, but just maybe mathematics education could be changed to reflect the “recent” development of tests from the ‘30s that we can use thanks to brand-new and improved computers from the ‘70s.
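
As a rough illustration of the difference (again a sketch of mine with made-up data, and using a Monte Carlo approximation rather than the exhaustive enumeration Fisher would have faced by hand), here is a two-sample permutation test next to the t-test it replaces:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    group_a = rng.normal(10.0, 2.0, 20)   # made-up data
    group_b = rng.normal(11.5, 2.0, 20)

    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)

    # Shuffle the group labels many times and ask how often a difference at
    # least as extreme as the observed one arises by chance alone.
    n_shuffles = 10_000
    count = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    perm_p = count / n_shuffles

    # The "compromise": the parametric t-test on the same data.
    _, t_p = stats.ttest_ind(group_a, group_b)
    print(f"permutation p ~ {perm_p:.4f}  vs  t-test p = {t_p:.4f}")

Recent versions of SciPy also ship scipy.stats.permutation_test, which packages exactly this kind of resampling; the loop above is written out only to show how much arithmetic is involved, and why such tests were impractical before computers.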

If you disagree, think about any popular science piece you’ve read or heard recently (or can remember). Unless it was on particle physics or one of the few other exceptions, chances are it concerned research that depends upon statistics known to be inferior for the better part of a century, though not known to be so by the vast majority of those who use them. To put that another way, most of what you come across on scientific research is based on bad methods, because the limited exposure to mathematics most researchers receive consists of little more than a minimal but systematic review of inferior statistics.

1For many it is also a science or even THE science. For example, in his 1997 monograph Mathematics as a Science of Patterns, Resnik calls mathematics the “queen of the sciences” but does not cite Gauss (whence comes the quote, albeit in the form “Die Mathematik ist die Königin der Wissenschaften…”). Apparently, it is no longer a quote but a proverb or aphorism and hence requires no citation.

2Jasmin, L., & Ohara, P. T. (2004). Anatomical identification of neurons responsive to nociceptive stimuli. In Z. D. Luo (Ed.), Pain Research (pp. 167–188). Humana Press.

3Zikmund, W., Babin, B., Carr, J., & Griffin, M. (2009). Business research methods (8th ed.). Cengage Learning.

4Mielke, P. W., & Berry, K. J. (2007). Permutation methods: A distance function approach (2nd ed.). Springer.

5Ibid.
