The not-so-obvious obvious

If your job requires you to pore over a dozen or two scientific papers every month – as mine does – you’ll notice a few every now and then couching a well-known fact in study-speak. I don’t mean scientific-speak: there’s nothing wrong with trying to understand natural phenomena in the formalised language of science. But there does seem to be something iffy – often with humorous effect – about a statement like the following: “cutting emissions of ozone-forming gases offers a ‘unique opportunity’ to create a ‘natural climate solution'”1 (source). Well… d’uh. This is study-speak: rephrasing mostly self-evident knowledge or truisms in unnecessarily formalised language, frequently in the style employed in research papers, without adding any new information but often introducing an element of doubt where there is likely to be none.

1. Caveat: These words were copied from a press release, so this could have been a case of the person composing the release being unaware of the study’s real significance. However, the words within single quotes are copied from the corresponding paper itself. That said, there have been some truly hilarious efforts to make sense of the obvious – for examples, consider many of the winners of the Ig Nobel Prizes.

Of course, it always pays to be cautious, but where do you draw the line between genuine caution and a result that exists simply because formal evidence is required to initiate a new course of action? For example, the Univ. of Exeter study – the press release accompanying which discussed the effect of “ozone-forming gases” on the climate – recommends cutting emissions of substances that combine in the lower atmosphere to form ozone, a compound form of oxygen that is harmful to both humans and plants. But this is as non-“unique” an idea as the corresponding solution that arises (of letting plants live better) is “natural”.

However, it’s possible the study’s authors needed to quantify these emissions to understand the extent to which ambient ozone concentration interferes with our climatic goals, and to use their data to inform the design and implementation of corresponding interventions. Such outcomes aren’t always obvious but they are there – often because the necessarily incremental nature of most scientific research can cut both ways. The pursuit of the obvious isn’t always as straightforward as one might believe.

The Univ. of Exeter group may have accumulated sufficient and sufficiently significant evidence to support their conclusion, allowing themselves as well as others to build towards newer, and hopefully more novel, ideas. A ladder must have rungs at the bottom irrespective of how tall it is. But when the incremental sword cuts the other way, often due to perverse incentives that require scientists to publish as many papers as possible to secure professional success, things can get pretty nasty.

For example, the Cornell University consumer behaviour researcher Brian Wansink was known to advise his students to “slice” the data obtained from a few experiments in as many different ways as possible in search of interesting patterns. Many of the papers he published were later found to contain numerous irreproducible conclusions – i.e. Wansink had searched so hard for patterns that he’d found quite a few even when they really weren’t there. As the British economist Ronald Coase said, “If you torture the data long enough, it will confess to anything.”
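The danger Wansink’s case illustrates – that testing enough hypotheses on the same noise guarantees “significant” patterns – is easy to demonstrate with a small simulation. The sketch below is only an illustration, not a reconstruction of Wansink’s analyses: it uses plain Python, an assumed threshold of p < 0.05, and a normal approximation in place of a proper t-test, and runs 200 comparisons between groups drawn from the same distribution, so every “hit” is a false positive.

```python
import math
import random

random.seed(42)

def z_statistic(a, b):
    """Z-statistic for the difference between two sample means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

false_positives = 0
n_tests = 200
for _ in range(n_tests):
    # Both groups come from the SAME distribution: any "effect" is noise.
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    if abs(z_statistic(group_a, group_b)) > 1.96:  # nominal p < 0.05
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons looked 'significant'")
```

With a 5% threshold, roughly ten of the 200 pure-noise comparisons will clear the bar on average – precisely the kind of mirage that slicing one dataset many different ways produces.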

The dark side of incremental research, and the virtue of incremental research done right, both stem from the fact that it is deceptively difficult to ascertain the truth of a finding when the strength of that finding is expected to be either so small that it tests the very notion of significance, or so large – so pronounced – that it transcends intuitive comprehension.

For an example of the former: among particle physicists, a result qualifies as a discovery only if the chance of its being a statistical fluke is about 1 in 3.5 million – the so-called five-sigma standard. So the Large Hadron Collider (LHC), which was built in part to discover the Higgs boson, had to perform enough proton-proton collisions capable of producing a Higgs boson – collisions its detectors could observe and its computers could analyse – to attain this significance.
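The “1 in 3.5 million” figure is just the one-sided tail probability of a standard normal distribution beyond five standard deviations (the convention particle physicists use), which can be checked with nothing but the standard library:

```python
import math

# One-sided tail probability beyond 5 sigma on a standard normal:
# p = P(Z > 5) = erfc(5 / sqrt(2)) / 2
p = math.erfc(5 / math.sqrt(2)) / 2
print(f"p = {p:.3e}, i.e. about 1 in {1 / p:,.0f}")
```

The computed probability works out to roughly 2.9 × 10⁻⁷, i.e. about one chance in 3.5 million that a pure statistical fluctuation of the background would look at least this significant.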

But while protons are abundantly available and the LHC can produce hundreds of millions of collisions per second, imagine undertaking an experiment that requires human participants to perform actions according to certain protocols. It is never going to be possible to enrol billions of them for millions of hours to arrive at a rock-solid result. In such cases, researchers design experiments around very specific questions, with protocols that suppress, or even eliminate, interference, sources of doubt and confounding variables, and that accentuate the effects of whatever action, decision or influence is being evaluated.

Such experiments often also require the use of sophisticated – but nonetheless well-understood – statistical methods to further eliminate the effects of undesirable phenomena from the data and, to the extent possible, leave behind information of good-enough quality to support or reject the hypotheses. In the course of navigating this winding path from observation to discovery, researchers are susceptible to, say, misapplying a technique, overlooking a confounder or – like Wansink – overanalysing the data so much that a weak effect masquerades as a strong one but only because it’s been submerged in a sea of even weaker effects.

Similar problems arise in experiments that require the use of models based on very large datasets, where researchers need to determine the relative contribution of each of thousands of causes to a given effect. The Univ. of Exeter study, which determined ozone concentration in the lower atmosphere due to surface sources of different gases, contains an example. The authors write in their paper (emphasis added):

We have provided the first assessment of the quantitative benefits to global and regional land ecosystem health from halving air pollutant emissions in the major source sectors. … Future large-scale changes in land cover [such as] conversion of forests to crops and/or afforestation, would alter the results. While we provide an evaluation of uncertainty based on the low and high ozone sensitivity parameters, there are several other uncertainties in the ozone damage model when applied at large-scale. More observations across a wider range of ozone concentrations and plant species are needed to improve the robustness of the results.

In effect, their data could be revised in future to reflect new information and/or methods. In the meantime – and far from being a silly attempt at translating a claim into jargon-laden language – the study eliminates doubt to the extent possible with existing data and modelling techniques in order to ascertain something. And even in cases where this something is well known or already well understood, validating its existence can also serve to validate the methods the researchers employed to (re)discover it and – as mentioned before – to generate data that is more likely to motivate political action than, say, demands from non-experts.

In fact, the American mathematician Marc Abrahams, better known for founding and awarding the Ig Nobel Prizes, identified this purpose of research as one of three possible reasons why people might try to “quantify the obvious” (source). The other two: being unaware of the obvious and, of course, wanting to disprove it.

From unboiling eggs to the effects of intense kissing, Ig Nobel Prizes reward good ol’ curiosity

The 2015 Ig Nobel Prizes were awarded on September 17, rewarding research that exemplifies a kind of excellence that still impacts society, without the sobriety of character that often bags the more vaunted Nobel Prizes. The 25th edition, held as usual at Harvard University’s Sanders Theatre and presided over, as usual, by Improbable Research‘s editor Marc Abrahams, recognised work on describing pain, diagnosing appendicitis, the effects of intense kissing and more.

Instituted and first awarded in 1991, the prizes were originally designed to identify work that shouldn’t be reproduced, although that snark has diminished over time. On the flip side, they’re known for juxtaposing meticulously conducted research with the banality of its subject. For example, the citation for this year’s management prize read, “… for discovering that many business leaders developed in childhood a fondness for risk-taking, when they experienced natural disasters that – for them – had no dire personal consequences.” The awarders’ take: “The Ig Nobel Prizes honour achievements that make people laugh, and then think. The prizes are intended to celebrate the unusual, honour the imaginative – and spur people’s interest in science, medicine, and technology.”

The 2015 literature prize went to Dutch linguists for discovering that a translation of “huh?” exists in almost every language, for unknown reasons. The biology prize was picked up by a Chilean dyad who found “that when you attach a weighted stick to the rear end of a chicken, the chicken then walks in a manner similar to that in which dinosaurs are thought to have walked”. The physics prize was claimed by scientists who, using the principles of fluid dynamics, found early last year that many mammals – across species – take a fairly uniform 21 seconds to take a leak (give or take 13 seconds). The diagnostic medicine prize awardees could actually have hit upon something more useful than you’d think: diagnosing appendicitis by having patients drive at a fixed speed over a speed-bump – if they experience a sharp pain in certain areas, it’s surgery time. The physiology and entomology prize was co-bagged by Justin Schmidt, for developing a relative pain index for stings, and Michael Smith, for letting himself be stung on 25 parts of his body to find the places most sensitive (nostril, upper lip, penis shaft) and least sensitive (skull, middle toe tip, upper arm) to stinging pain. Brave souls all.

The citations also demonstrated how being persistently curious could someday enable you to do things you wouldn’t have thought scientifically (or mathematically) possible. For example, the chemistry prize went to a team from the USA and Australia that figured out how to partially unboil an egg (kudos to Abrahams & co. for being able to go past the paper’s title: “Shear-stress-mediated refolding of proteins from aggregates and inclusion bodies”). The medicine prize may have actually put too fine a point on what everyone probably already knew: kissing does people a world of good, and intense kissing does good intensely. And there’s no point trying to paraphrase the mathematics-prize-winning work: “for trying to use mathematical techniques to determine whether and how Moulay Ismael the Bloodthirsty, the Sharifian Emperor of Morocco, managed, during the years from 1697 through 1727, to father 888 children.”

However, it’s the work winning the 2015 economics prize that doesn’t deserve to be reproduced at all – and it’s probably telling that it involved not scientists but policemen. Specifically, the prize went to the Bangkok Metropolitan Police, which offered to bribe its policemen if they didn’t take bribes from others. The BMP can take pride in its work’s illustrious company, which includes the 2008 recession, the invention of virtual animal husbandry, and the finding that people would indeed postpone their deaths “if that would qualify them for a lower rate on the inheritance tax”.