## Non-Math Academics Tend to Be Impressed by Equations

What’s the value of an equation? This study appears to show that, to academics who don’t use much mathematics, any equation can be impressive — no matter what the equation says, and whether or not it adds any clarity or knowledge to a situation:

“The Nonsense Math Effect,” Kimmo Eriksson, Judgment and Decision Making, vol. 7, no. 6, November 2012, pp. 746–749. (Thanks to investigator Mark Dionne for bringing this to our attention.)

The author, at Mälardalen University in Västerås, Sweden, explains:

“Although potentially applicable in every discipline, the amount of training in mathematics that students typically receive varies greatly between different disciplines. In those disciplines where most researchers do not master mathematics, the use of mathematics may be held in too much awe. To demonstrate this I conducted an online experiment with 200 participants, all of which had experience of reading research reports and a postgraduate degree (in any subject). Participants were presented with the abstracts from two published papers (one in evolutionary anthropology and one in sociology). Based on these abstracts, participants were asked to judge the quality of the research. Either one or the other of the two abstracts was manipulated through the inclusion of an extra sentence taken from a completely unrelated paper and presenting an equation that made no sense in the context. The abstract that included the meaningless mathematics tended to be judged of higher quality. However, this ‘nonsense math effect’ was not found among participants with degrees in mathematics, science, technology or medicine.”

Here is some detail from the study (Figure 1 in the paper, a bar chart showing the percentage of participants in each degree group who rated the with-math abstract higher):

BONUS: An essay about this, by Kevin Drum.

BONUS: Professor Eriksson mentions, on his web site, a soon-to-be-published paper on a very different topic that seems likely to draw some attention:

Kimmo Eriksson (in press). “Autism-spectrum traits predict humour styles in the general population.” To appear in Humor: International Journal of Humor Research.

December 28th, 2012 at 10:57 am

Ironically, the abstract commits some of the most basic classical nonsense-math errors.

The vertical axis begins at 45%, grossly distorting the magnitude of the difference in heights of the data bars for each category. (The left-most bar is barely above the bottom of the graph, while the one next to it reaches more than halfway to the top – but the actual difference in magnitude is only about 15%.) Using shortened axes is a classic technique for distorting the visual impact of graphs.

Also, the abstract states that “this ‘nonsense math effect’ was not found among participants with degrees in science, mathematics, technology or medicine”. In fact, a close look at the distorted graph clearly shows that almost half the respondents in “Math, sci., tech.” gave a better evaluation to the nonsense-math abstract, while in the category “Medic.” almost 65% did – the second highest rate among all groups. The graph itself directly contradicts the interpretation given in the text of the abstract.

If the authors of this study were better at avoiding nonsense mathematics, their graph, and their interpretation of their own data, might have been stronger.

December 29th, 2012 at 9:58 pm

You are badly wrong on every point. Go and read the study itself.

First:

The medical group had a relatively small and NON-STATISTICALLY-SIGNIFICANT tendency to rate the abstracts with mathematics in them higher. (It was a mean effect of +3 on a scale from 0 to 100.)

Since the difference for the medical group is not statistically significant, it cannot be claimed as a result in the paper.

The effect was /consistent/ but it was not /large/. Maybe the authors wish they had been looking at the consistency of the effect rather than the size, but you can’t change your criteria after you’ve collected the data.

I can open it from here, so I think it’s free, but just in case it’s not, here is Table 1 from the paper:

Table 1. Mean (SD) rating advantage of added math.

| Group | N | Mean (SD) |
|---|---:|---|
| Math, science, technology | 69 | −1.3 (19.2) |
| Medicine | 16 | 3.0 (16.0) |
| Humanities, social science | 84 | 6.6** (21.2) |
| Other, e.g., education | 31 | 13.9** (23.3) |
| Total | 200 | 4.7** (21.0) |

*: p < .05; **: p < .01

The last two groups are statistically significant at the .01 level. The first two are not significant even at the .05 level.
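To see why the +3 for the medical group does not reach significance while the larger effects do, one can run rough one-sample t-tests from the summary statistics in Table 1. This is only a sketch: it tests each group's mean rating advantage against a null of zero and uses a normal approximation for the p-value, which need not match the paper's own analysis in detail.

```python
from math import sqrt
from statistics import NormalDist

# Summary statistics from Table 1 of the paper: (mean, sd, n)
groups = {
    "Math, science, technology": (-1.3, 19.2, 69),
    "Medicine": (3.0, 16.0, 16),
    "Humanities, social science": (6.6, 21.2, 84),
    "Other, e.g., education": (13.9, 23.3, 31),
    "Total": (4.7, 21.0, 200),
}

def t_statistic(mean, sd, n):
    """One-sample t-statistic against a null of 0 (no rating advantage)."""
    return mean / (sd / sqrt(n))

for name, (mean, sd, n) in groups.items():
    t = t_statistic(mean, sd, n)
    # Two-tailed p-value via a normal approximation to the t distribution
    # (slightly optimistic for the small Medicine group, but close enough
    # to show which effects are nowhere near significance).
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    print(f"{name}: t = {t:.2f}, p ~ {p:.3f}")
```

With these numbers, Medicine comes out around t = 0.75 (p far above .05), while the humanities/social-science, “other,” and total groups all clear the .01 threshold, matching the asterisks in the table.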

Second, your assertion that the readers should have noticed the "startlingly contextless" mathematical sentence is wrong.

The manipulated version would have been, for example:

“The present study adopts an experimental audit approach—in which matched pairs of individuals applied for real entry-level jobs—to formally test the degree to which a criminal record affects subsequent employment opportunities. [blah blah] A mathematical model (T_{PP} = T_0 − f T_0 d_f^2 − f T_P d_f) is developed to describe sequential effects.”

The last sentence is what was added. It is not obviously gibberish. It is unconnected to anything else, but an unconnected sentence in the abstract does not necessarily mean that the paper is nonsense.

The question this study tries to answer is: if you talk about a "mathematical model," and put a semi-plausible but unexplained equation in the abstract, will that impress people?

An unexplained sentence in the abstract probably will not hurt. For all a mathematician knows, everyone in criminology already knows what T_{PP} is. So using "no effect" as a baseline is entirely reasonable.

Finally: is it kosher to cut off the top and bottom of the graph?

Yes. Sure, a graph with the top and bottom cut off will mislead a lot of people if you print it in a newspaper (or here). But not in a scientific journal. The scientific reader has statistical training and can see that the axis only goes to 75%.

(The scientific reader also realizes that the effect that we're talking about is actually shown in the difference from 50%, so it's more correct if you do mentally draw a line at 50%.)

It /would/ be nice if there were error bars on Figure 1. (I'd imagine that they would be pretty large.) And it would be even better if Improbable Research had published Table 1 instead of the graph, but that is not an objection to the study.
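The point about the 50% baseline and the missing error bars can be made concrete with a sign-test sketch. Suppose, as the earlier comment reads off the graph, that roughly 65% of the 16 medical respondents — call it 10 of 16 — favored the with-math abstract. (The count of 10 is an assumption inferred from the graph, not a figure from the paper.) A two-sided exact binomial test against a 50% baseline is then nowhere near significance:

```python
from math import comb

def binomial_two_sided_p(successes, n):
    """Two-sided exact binomial (sign) test against p = 0.5.

    Uses the doubled upper tail, which is valid when successes >= n/2.
    """
    tail = sum(comb(n, k) for k in range(successes, n + 1)) / 2**n
    return min(1.0, 2 * tail)

# ~65% of 16 medical respondents ~ 10 favoring the with-math abstract
p_value = binomial_two_sided_p(10, 16)
print(f"p ~ {p_value:.3f}")
```

With N = 16, even a bar at 65% is entirely consistent with chance, which is why the tall “Medic.” bar and the “no effect found” conclusion are not actually in conflict.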

December 29th, 2012 at 10:17 pm

Summary:

There is an apparent inconsistency. The actual conclusion of the study says that the nonsense math effect wasn’t found in people with “degrees in mathematics, science, technology or medicine.” But the bar for “medicine” seems pretty high! How can that be?

The guy I’m replying to is saying, “Well, this paper is clearly inconsistent and terrible.” No. In the paper, the effect for people who reported medical degrees was small (+3%) and not statistically significant.

On the other hand (and this is the apparent paradox), the bar graph, Figure 1, shows that they consistently gave /slightly/ larger scores to the abstracts with nonsense mathematics in them!

But that graph doesn’t have any error bars or anything, so it might be better to ignore it.

April 14th, 2013 at 2:42 pm

What you are saying is only partially true, because there is a line all the way across at the 50% level, which is difficult to miss.

December 29th, 2012 at 6:00 am

Comments by Kevin T. Keith remind me of the fundamental question: What makes topics introduced by IG Research different from those by other media?

IG Research is unique because of its spirit: it first makes people laugh, and then think. The latter part depends largely on how well we (including readers, editors, and communicators of IG Research) are trained in science. Recent topics posted here (particularly those in biomedical fields) might still make people laugh but do not necessarily encourage people to think; they often try to impress by deliberately displaying eccentricity in a strange manner.

December 30th, 2012 at 1:21 am

[…] Seen at Improbable Research […]

April 14th, 2013 at 1:27 pm

If they don’t understand it then it must be good.