There’s something interesting happening in the staid world of peer review these days, and a recent set of posts on another site renewed my hope that it could be a positive trend. Raphaël Lévy at the University of Liverpool starts off the first post this way:
Challenging published results is an onerous but necessary task. Today, our article entitled Stripy Nanoparticles Revisited has been published in Small, three years after its initial submission to this journal (3/12/09) and about three and a half years after the first submission (to Nature Materials, 21/07/09).
As its title indicates, the article challenges the evidence for the existence and properties of “stripy” nanoparticles.
That post, and a follow-up from another nanoparticle researcher, make interesting reading just for the underlying science. Both researchers do a good job describing the techniques for creating and studying these very tiny structures, which could be useful for all sorts of cool engineering tricks. One intriguing characteristic of these nanoparticles is that they can apparently cause certain types of molecules to self-organize into precise patterns of stripes on their surfaces. At least, that was what earlier work had shown. Lévy, however, thinks this stripy pattern is an artifact, the microscopic version of an optical illusion.
On its surface, this looks like a very small controversy in every sense of the word. Who cares whether a few minuscule spheres mixed with some odd chemicals are striped or smooth or bumpy or dancing the hokey-pokey? I’m certainly not qualified to take a stance on the issue, and it’s hard to see any immediate relevance to my life, but seemingly arcane disputes like this drive much of the scientific enterprise. Get the details wrong on stripy nanoparticles, and maybe we can’t build the next generation of computers. We won’t know why it matters until suddenly it does.
The classic quality control system for scientific facts is peer review, in which scientists submit their work to journals, which distribute the paper to anonymous colleagues of the author for independent analysis. If the author’s professional competitors agree that the new findings are significant and probably correct, then the journal will publish the paper. Peer review has always been a deeply flawed system, plagued by academic politics, errors, and inefficiency, but until recently nobody could come up with a better one.
The World Wide Web changed that by design. It’s easy to forget that the whole online ecosystem we now take for granted originated from a scientist’s frustration with paper-based publishing. YouTube, Amazon, Facebook, and their ilk are merely side-effects of a platform built expressly for reporting science. Now, decades later, researchers are finally starting to appreciate the full depth of what the Web can do for peer review. Lévy and his colleagues are part of that trend. Ironically, they still have to combat the perception that the internet is somehow an inappropriate venue for this. A system built for science is now synonymous with shopping, ranting, and porn in many people’s minds.
What I find most interesting about the stripy nanoparticle conversation is that it has the generally civil tone, moderate pace, and narrow scope of classical peer review. There are lots of other examples of this type of “post-publication review” online, but the ones that draw attention are usually as much about public relations as they are about data. Reading the comments on Lévy’s blog, I’m struck by how thoughtful and well-informed they (mostly) are, and how many of the participants genuinely seem to care more about getting the right answer than winning. That’s how it’s supposed to be done. That’s science.