Archive for March, 2010

Bad Bayes still bad

Tamino, a notorious “climate change” blogger, is alleged also to be a statistician. He certainly seems to know something about time series. (Thanks to this investigation, we know that Tamino is Grant Foster, writer of “blog diatribe”-style climate papers. His affiliation in the linked paper is “Tempo Analytics, Westbrook, Maine”, but I can’t find any other reference to it online.)

Unfortunately, he may be somewhat off base in other areas of statistics. His discussion of Bayesian analysis is so confused that I’ll leave it to Andrew Gelman, professor of statistics at Columbia University, to summarise it for us:

Kent Holsinger sends along this statistics discussion from a climate scientist. I don’t really feel like going into the details on this one, except to note that this appears to be a discussion between two physicists about statistics. The blog in question appears to be pretty influential, with about 70 comments on most of its entries. When it comes to blogging, I suppose it’s good to have strong opinions even (especially?) when you don’t know what you’re talking about.

Update: Gelman repeated himself on his academic blog, where he elaborates on his opinion in the comments. Strangely, when I tried (twice) to comment on “Tamino”’s blog to point him to Gelman’s remarks, my comments never appeared; but when someone else posted the same link with the qualifier that “[Gelman] comes around to Tamino’s side” in his later comments [which is not actually true], the link went up.

At the time of writing, the comment thread ends with “Tamino” abusing a commenter who tried to correct one of his calculations, until he eventually admits that he was indeed wrong. Oh dear.


Another sampling from the great frequentist malpractice genre in the sky

That this isn’t well-known amongst the general public is a disgrace, but the “scientific method” as carried out by academic careerists has long been only a poor substitute for real science:

It’s science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.

From sciencenews.org. Then follow the usual errors about the interpretation of hypothesis tests and other applied frequentist gunk. There is an interesting point about how randomisation isn’t all it’s cracked up to be (although what the alternative should be is anyone’s guess), before… behold!

Such sad statistical situations suggest that the marriage of science and math may be desperately in need of counseling. Perhaps it could be provided by the Rev. Thomas Bayes.

A lovely line. Whether this latest addition to the litany against the standard operating procedure of too many scientists, in all disciplines, will change anything more than previous attempts did is moot.
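For the record, the most common of those “usual errors” is reading a p-value as the probability that the null hypothesis is true. A minimal reminder of the distinction (my own sketch, not from the Science News piece), in the notation of Bayes’ theorem:

\[
p = \Pr(T \ge t_{\mathrm{obs}} \mid H_0)
\qquad\ne\qquad
\Pr(H_0 \mid \text{data}) = \frac{\Pr(\text{data} \mid H_0)\,\Pr(H_0)}{\Pr(\text{data} \mid H_0)\,\Pr(H_0) + \Pr(\text{data} \mid H_1)\,\Pr(H_1)}.
\]

The left-hand side is a tail probability computed on the assumption that the null is true and needs no prior; the right-hand side cannot be computed at all without a prior \(\Pr(H_0)\) and a model for the alternative, which is exactly the gap the Rev. Bayes is being invited to fill.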
