Tag Archives: science publishing

Poo and Shit, Revisited

Correspondent Bob has provided an interesting update on some research I did on relative publication rates back in 2010:

Just as a follow up to your Poo vs Shit analysis, a PubMed search today (11 April 2013) reveals that Shit is on the rise with 23 articles now with A. having first authorship on 7 and second on 4. That’s a 3.3-fold increase overall compared to Poo’s modest 16.9% increase from 359 to 420. As an additional modicum of irony, A. [Shit]’s articles are mostly related to BROWNian particle motions.

Shit’s fans will no doubt be pleased with these new hits.
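
For anyone who wants to rerun this census as the numbers keep climbing, the tally is easy to script. Here is a minimal sketch using Biopython’s Entrez module to query NCBI’s E-utilities; note that the author search terms are my guesses, since neither the original analysis nor Bob’s update spells out the exact queries.

```python
from Bio import Entrez  # Biopython's wrapper around NCBI's E-utilities

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def pubmed_count(term):
    """Return the total number of PubMed records matching a query."""
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Guessed query terms; the original posts don't give the exact searches.
for term in ("Poo[Author]", "Shit[Author]"):
    print(term, pubmed_count(term))
```

Counting authorship positions, as Bob did, would take a further Entrez.efetch call to pull each record and inspect its author list.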

Do These Stripes Make My Nanoparticles Look Weird?

There’s something interesting happening in the staid world of peer review these days, and a recent set of posts on another site renewed my hope that it could be a positive trend. Raphaël Lévy at the University of Liverpool starts off the first post this way:

Challenging published results is an onerous but necessary task. Today, our article entitled Stripy Nanoparticles Revisited has been published in Small, three years after its initial submission to this journal (3/12/09) and about three and a half years after the first submission (to Nature Materials, 21/07/09).

As its title indicates, the article challenges the evidence for the existence and properties of “stripy” nanoparticles.

That, and a follow-up post from another nanoparticle researcher, are interesting reading just for the underlying science. Both researchers do a good job describing the techniques for creating and studying these very tiny structures, which could be useful for all sorts of cool engineering tricks. One intriguing characteristic of these nanoparticles is that they can apparently cause certain types of molecules to self-organize into precise patterns of stripes on their surfaces. At least, that was what earlier work had shown. Lévy, however, thinks this stripy pattern is an artifact, the microscopic version of an optical illusion.

On its surface, this looks like a very small controversy in every sense of the word. Who cares whether a few minuscule spheres mixed with some odd chemicals are striped or smooth or bumpy or dancing the hokey-pokey? I’m certainly not qualified to take a stance on the issue, and it’s hard to see any immediate relevance to my life, but seemingly arcane disputes like this drive much of the scientific enterprise. Get the details wrong on stripy nanoparticles, and maybe we can’t build the next generation of computers. We won’t know why it matters until suddenly it does.

The classic quality control system for scientific facts is peer review, in which scientists submit their work to journals, which distribute the paper to anonymous colleagues of the author for independent analysis. If the author’s professional competitors agree that the new findings are significant and probably correct, then the journal will publish the paper. Peer review always has been a deeply flawed system, plagued by academic politics, errors, and inefficiency, but until recently nobody could come up with a better one.

The World Wide Web changed that by design. It’s easy to forget that the whole online ecosystem we now take for granted originated from a scientist’s frustration with paper-based publishing. YouTube, Amazon, Facebook, and their ilk are merely side-effects of a platform built expressly for reporting science. Now, decades later, researchers are finally starting to appreciate the full depth of what the Web can do for peer review. Lévy and his colleagues are part of that trend. Ironically, they still have to combat the perception that the internet is somehow an inappropriate venue for this. A system built for science is now synonymous with shopping, ranting, and porn in many people’s minds.

What I find most interesting about the stripy nanoparticle conversation is that it has the generally civil tone, moderate pace, and narrow scope of classical peer review. There are lots of other examples of this type of “post-publication review” online, but the ones that draw attention are usually as much about public relations as they are about data. Reading the comments on Lévy’s blog, I’m struck by how thoughtful and well-informed they (mostly) are, and how many of the participants genuinely seem to care more about getting the right answer than winning. That’s how it’s supposed to be done. That’s science.

Decoding ENCODE

Today, a scientific collaboration called the Encyclopedia of DNA Elements (ENCODE) published some of its data. When I say “collaboration,” I mean more than 400 scientists working in 32 different labs, and when I say “some of its data,” I mean over 1,600 experiments involving 24 types of analyses on 147 cultured cell lines.

[Figure: A typical ENCODE experimental result. Links to the original figure.]

ENCODE didn’t publish this massive data set in a paper. They published it in 30 papers that came out simultaneously in three journals, plus additional commentary elsewhere. To help people make sense of this information glut, Nature, the main publisher, set up a special web page where all of the papers are freely available, released an iPad app that lets users explore the results through different “threads” of inquiry, and held a press conference that featured several of the consortium’s principal researchers as well as an interpretive dance performance inspired by the results. Yes, really.

As regular readers know, I’m always ready to call out publishers who engage in excessive hype. In this case, though, I think Nature’s hoopla is entirely appropriate. This is a $185 million project that’s trying to figure out how humans work at a molecular level, and the current batch of publications presents both a rough sketch of an answer and a whole new list of big questions.

Most science news stories on ENCODE will probably begin and end with an observation about “junk DNA,” and how the new data apparently overturn the notion that most of the human genome is just taking up space. Perhaps acknowledging that this is the most easily digested result, the press materials and many of the commentary articles highlight it. Molecular biologist Joseph Ecker puts it this way in his synopsis:

One of the more remarkable findings … is that 80% of the genome contains elements linked to biochemical functions, dispatching the widely held view that the human genome is mostly ‘junk DNA.’ The authors report that the space between genes is filled with enhancers (regulatory DNA elements), promoters (the sites at which DNA’s transcription into RNA is initiated) and numerous previously overlooked regions that encode RNA transcripts that are not translated into proteins but might have regulatory roles.

But neither that result nor any other individual piece of the data is really the main point. What matters about ENCODE is the totality of it, and what the scale of the data says about the future of biology.

When the Human Genome Project released its draft sequence 11 years ago, it was a bit like Deep Thought reporting that the answer was in fact 42. By itself, the genome sequence told us that we only had about 20,000 genes, and that most of our DNA didn’t look like it had any function at all. There was obviously a lot more going on than we’d be able to glean just by looking at the sequence.

ENCODE is a follow-up project, in which researchers used a huge variety of techniques to probe the functions of all of the parts of our DNA, not just the segments that contain obvious genes. They looked for enhancers that can control the expression of genes in other parts of the genome. They screened all of the RNA in cells to find new pieces of micro-RNA, a type of gene-controlling molecule we didn’t even know about when I went to graduate school. They tested which parts of the genome were wrapped up in chromatin, a sort of deep storage system, and which were open for business in different types of cells. And on and on. In short, they examined what every piece of the genome was doing under as many different conditions as they could.

Besides finding that most of the genome is probably doing something to earn its keep, ENCODE has illuminated the scope of the problem biologists now face. It’s huge.

A graphic accompanying Brendan Maher’s excellent news feature on the project shows what ENCODE has accomplished so far, and how much work remains just to finish its initial phase. For example, the investigators have looked at only 120 of an estimated 1,800 transcription factors, proteins that control gene expression directly, and they’ve only looked at those factors in a subset of the cell lines they set out to study. That one snippet of the work produced a massive amount of information by itself.

Even after ENCODE finishes, what we’ll have will be more of a pamphlet than an encyclopedia. Cultured human cell lines are a great tool for laboratory studies, but they only partly mimic the behavior of the cells that make up a real human, which in turn vary from person to person and within a single person over time. ENCODE is giving us a two-dimensional view of a system that’s at least five-dimensional. That’s not to minimize the project; the team has made astonishing progress, but it’s just a start.

After doing the rest of the cultured cell experiments, biologists will have to figure out the results, which raises a whole new problem. I can’t tell you what all of the ENCODE data mean. Neither can the people who generated them. Besides the 30 new papers (and their supplementary online sections), the project has also produced databases, software, and other analytical tools so scientists can dive into the results directly. The conclusions of the new papers are just the bits that the experimenters thought were most interesting. As happened with the human genome sequence, people will be digging new publications out of these data for years.

Right now, we’re like astronomers looking at millions of smudges of light we can see with a new telescope, and it’s just dawning on us that those aren’t stars. They’re galaxies.

ENCODE is also part of a trend that’s raising tough ancillary questions for scientists and science publishers. Though its principal investigators undoubtedly see the project as worthwhile, $185 million is a lot of money, and the reality of government-sponsored science is that it’s a zero-sum game. Despite what some big science proponents claim, funding for consortium-based “factory research” studies such as this necessarily comes at the expense of individual investigator-led projects. In an environment where thousands of promising young researchers are scrambling for grants, can we be sure this was the best way to spend those funds?

From the publishers’ perspective, big science is fraught with disputes over credit, concerns about oversight and data integrity, and fundamental questions regarding the proper length and format for a paper. It’s not even clear that a project like this should be published in a conventional journal; perhaps the data should simply go online, accompanied (or not) by a few comments from the lead scientists. As the ENCODE juggernaut keeps rolling along, and as subsequent, even bigger projects follow it, it might not even be possible to crank out papers for each new batch of work.

But this, too, is an expected result. This is what science does: uses what’s possible to redefine what’s possible. The ability to sequence a gene becomes the ability to sequence a genome becomes the ability to sequence a thousand genomes. When our minds can’t accommodate the new information, we’ll just have to expand them.

References:

1. Nature 489, 57–74 (06 September 2012) doi:10.1038/nature11247

2. Nature 489, 75–82 (06 September 2012) doi:10.1038/nature11232

3. Nature 489, 83–90 (06 September 2012) doi:10.1038/nature11212

4. Nature 489, 91–100 (06 September 2012) doi:10.1038/nature11245

5. Nature 489, 101–108 (06 September 2012) doi:10.1038/nature11233

6. Nature 489, 109–113 (06 September 2012) doi:10.1038/nature11279

Cool Project, Odd Name

Researchers in France have just published a description of a new tool for ecological scientists. As Nature Methods explains in an accompanying press package:

Animals disperse from their habitats for a variety of reasons, including environmental change and habitat fragmentation due to human activity. Studying the factors that affect this process is not easy: existing setups trade off between scale and environmental control. Small laboratory setups allow control of climatic variables, but they do not realistically mimic field conditions and can typically be used for only small organisms. Large-scale field experiments lack environmental control.

Jean Clobert and colleagues fill this gap with the Metatron: an infrastructure of 48 habitat patches on four hectares of land in southern France. Temperature, humidity and light in the individual patches of the Metatron can be experimentally controlled. The patches are connected by flexible corridors presenting varying degrees of difficulty to a dispersing animal. In pilot experiments, the researchers used the Metatron to study lizard and butterfly dispersal. The setup will be useful to study the dispersal of many organisms and to determine how dispersal is affected by changing environmental conditions.

It’s a great idea, and apparently it’s open for business; scientists at other institutions can now submit research proposals to conduct work at the facility. There are a couple of amusing quirks in the announcement, though. First, it seems odd that Nature felt the need to embargo this publication, considering the sponsors have already set up a public web site describing the Metatron. Second, I can’t help wondering why the team didn’t check Google before settling on that name. Now they’ve risked angering some Talmudic scholars, and also set themselves up for unfavorable comparisons with a much funnier predecessor.

Open Access vs. Local Politics

Someone just asked me what I thought of Michael Eisen’s op-ed piece that came out in the New York Times a couple of weeks ago. Eisen wrote about a new bill in Congress that would roll back an NIH policy requiring NIH-funded researchers to submit copies of their publications to the National Library of Medicine’s publicly accessible web site. As Eisen explains:

But a bill introduced in the House of Representatives last month threatens to cripple this site. The Research Works Act would forbid the N.I.H. to require, as it now does, that its grantees provide copies of the papers they publish in peer-reviewed journals to the library. If the bill passes, to read the results of federally funded research, most Americans would have to buy access to individual articles at a cost of $15 or $30 apiece. In other words, taxpayers who already paid for the research would have to pay again to read the results.

This is the latest salvo in a continuing battle between the publishers of biomedical research journals like Cell, Science and The New England Journal of Medicine, which are seeking to protect a valuable franchise, and researchers, librarians and patient advocacy groups seeking to provide open access to publicly funded research.

The bill is backed by the powerful Association of American Publishers and sponsored by Representatives Carolyn B. Maloney, Democrat of New York, and Darrell Issa, a Republican from California. The publishers argue that they add value to the finished product, and that requiring them to provide free access to journal articles within a year of publication denies them their fair compensation. After all, they claim, while the research may be publicly funded, the journals are not.

I work for some of those journals, and don’t agree with the policy their lobbyists are promoting here. That said, I’m not entirely persuaded by the open access argument Eisen promotes. I’ve described some of my concerns on this blog already. Briefly, I don’t think the open access movement is really about making research “free.” It’s mainly haggling over price and billing.

The public absolutely should have direct access to the results from taxpayer-financed research, without having to pay a second time. By charging exorbitant per-article access fees and subscription rates, subscriber-supported journals are putting profit over public interest. Of course most of them are private corporations, so they’re supposed to act selfishly. That’s why we need a regulation that requires them to release these papers to the public within a reasonable time frame.

That said, the business model Eisen supports isn’t truly free. Open access journals such as the PLoS family of publications invariably charge a hefty “page fee” for researchers to publish their work. They also make a considerable amount of money from advertising. This has led to a booming industry of “open access” journals, some of which are little more than rebranded vanity presses. Don’t let the charitable-sounding description fool you; open access journals, even the really good ones, are still very much about profit.

Not that there’s anything wrong with that. I make my living from those profits. Indeed, while Eisen and other open access proponents often point out that peer reviewers work for free, they seldom mention the rest of the hardworking staff required to publish a credible journal. At journals such as Science and Cell, for example, someone with the title “Research Editor” has to receive the deluge of submitted manuscripts, triage them, distribute them to appropriate peer reviewers, evaluate the reviewers’ comments, and ultimately decide what to accept. Good research editors are not easy to find, and they absolutely don’t (and shouldn’t be expected to) work for free. For journals that also have news sections, as all of the really big ones now do, there are also news editors and writers like me. If we want to continue to have that added value in research publications – and the evidence is that everyone does – then we have to figure out how to pay for it. There’s also the cost of page design, archiving, and for journals that still have a paper edition, printing and distribution.

The real distinction between subscriber-supported and open access journals, then, is not whether they are in business to make a profit, but who pays and how much. In open access, the researchers pay, through their taxpayer-funded grants and through the advertising costs folded into the prices of the equipment and services they buy. In the subscription model, readers pay. So the taxpayers ultimately pick up the tab in both cases, just by different mechanisms.

Back when journals were only available on paper, and anyone could get access to them through the library system, the public could read the research they’d paid for at no cost. It just took a while through inter-library loan. Now we expect everything to be available online, so the NIH open access policy forces the papers to be released that way. As I said, I think that’s appropriate. Yes, someone could still go to the library and ultimately get access to all of the papers, but in the 21st century we shouldn’t require that.

I think the solution is for journals that are currently subscriber supported to move to a business model that’s more like open access. The NIH policy is a good nudge in that direction, as it mandates public release of the papers, but only after a six-month grace period. While the subscriber-supported journals can still charge for immediate access, the policy puts them on notice that they’d better come up with a new plan for the long term. As PLoS and others have demonstrated, that doesn’t have to mean working for free.

So why did Maloney and Issa push a bill that would derail this evolution in science publishing? Well, Maloney’s Congressional district includes the US corporate headquarters of mega-publisher Elsevier, and Issa’s district is adjacent to two other Elsevier offices. Just sayin’.

Who’s More Productive? No, How.

There’s a common belief that science shouldn’t try to answer “why” questions. Instead, it should focus on what it’s good at: answering “how” questions. I wondered whether that was really true, so I compared the relative productivity of Who, What, When, Where, Why, and How, and ranked them according to their PubMed publication records. Here are the results:

[Figure: Productivity of questions]

While this seems to bear out the conventional wisdom – How is more than fivefold ahead of Why – it suggests that Why is not completely unproductive, particularly when compared to Who, What, When, and Where. Indeed, Why’s 83 citations trounce Who’s three and When’s one, and we can only wonder what What and Where (zero citations each) have been up to.
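
For anyone who wants to referee this contest as the literature grows, here is a minimal sketch of the ranking, assuming Biopython is installed; the plain [Author] query is my guess at how the original tally was done.

```python
from Bio import Entrez  # Biopython's wrapper around NCBI's E-utilities

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def author_count(surname):
    """Count PubMed records listing the given surname as an author."""
    handle = Entrez.esearch(db="pubmed", term=f"{surname}[Author]", retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Tally each question word, then print them from most to least productive.
questions = ["Who", "What", "When", "Where", "Why", "How"]
counts = {q: author_count(q) for q in questions}
for q, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{q:>5} {n}")
```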

If you’re wondering how How maintains its lead, the key seems to be diversification. Just a peek at the first five of How’s 554 papers indicates an incredible breadth of interests:

[Figure: How’s publications]

Middlemen, Marketing, and a Modest Proposal

During the heady days of the dot-com bubble, “disintermediation” was one of the hot buzzwords. E-commerce proponents proclaimed the death of stores, the shortening of supply chains, and the impending arrival of a new world in which producers sold their products directly to consumers.

What the cool kids didn’t realize was that in many industries, middlemen were doing a lot more than just ringing up purchases and stocking shelves. People would rather buy a gallon of milk and a pound of ground beef from the local grocer than order a whole cow direct from the farm, no matter how much theoretical savings the latter strategy could reap. Even in cases where e-commerce did eliminate the local stores, it often created whole new classes of middlemen in their place. Blockbuster died as Netflix appeared, record stores only shut after iTunes opened, and Amazon danced on Woolworth’s grave.

Ironically, the news business has undergone the slowest, strangest, and least predictable change in the whole e-commerce marketplace. A decade ago, it seemed simple. Journalists, regardless of their medium, sell information. The internet makes information available for free. Therefore, Old Media should die and people should get their news directly from the sources. So why hasn’t that happened yet?

Like the cow, primary news sources are a poor substitute for the finished products most people want. Yes, anyone with enough persistence and time could probably contact Arab and Israeli political leaders, the NBA players’ representatives, and the lead scientists from that cool study that came out last week, but persistence and time are in short supply. It’s worth hiring someone to do the legwork for you and write up a cogent summary. It may not be popular in some circles to admit this, but journalists provide real value to society.

Unfortunately, we’ve already gone a long way toward eliminating them. Newspapers are closing down, consolidating, or shrinking. The radio dial is now a wholly owned subsidiary of Clear Channel. TV newsrooms have shrunk dramatically. Instead of news carefully researched and reported by journalists, we get stories hastily dashed off by a few remaining employees. Despite skyrocketing demand for information, we’re faced with a plummeting supply of the very people who make it understandable.

Or at least we would be, if another industry hadn’t stepped in to fill the void. While reporting jobs continue to vanish, the marketing and public relations industries are growing by leaps and bounds, often hiring the same folks who previously wrote objective news. A journalist used to make dozens of phone calls to get the details, quotes, sound bites, and images to flesh out a story. Now he only has to look at the press release and download the b-roll.

The internet makes this new PR-driven world accessible to anyone. Organizations that couldn’t have afforded a public relations campaign in the past can now set up an account on a press release service and start publishing their own “news.” If that sounds like the “direct from primary sources” utopia, don’t be fooled. Press releases come from people with a specific angle on the story. Ideally, reporters would take these releases as starting points, following up with their own questions and then tracking down sources with divergent opinions. In reality, the declining number of journalists leaves the remaining ones swamped with work, increasing the odds that they’ll just rewrite the press release and run it as a story.

This problem is particularly acute in science reporting. I skim through piles of press releases about new research findings just about every morning. Some of them then appear, more or less verbatim, on hundreds of news outlets later in the day. It’s too easy to say that those reporters should have waited until they could research the story properly. In our media-saturated environment, 60 seconds could be the difference between getting a top-ranked headline on Google News and vanishing off the bottom of the list. Better never than late.

Of course there’s a whole parallel universe of science bloggers, public-minded researchers, and celebrity science popularizers who now make a living presenting smart analysis of the latest findings. Unfortunately, that’s an opt-in system; people who don’t seek out good, well-reported science news are becoming less and less likely to encounter it.

This isn’t a problem we can just shrug off. Science underpins every aspect of modern life, and the biggest challenges humanity now faces cannot be addressed without it. If you’re wondering how a major political party can survive being taken over by climate change denialists, or how vaccine-preventable diseases can suddenly reappear in industrialized countries, or why we can’t seem to fix our healthcare system, here’s your answer. A scientifically illiterate public is at the mercy of shysters and lunatics.

Effective journalism isn’t sufficient to solve ignorance by itself, but it is essential. Even if we did an impeccable job of teaching science in public schools – and we don’t – we’d still need thoughtful reporting on new research to be the rule, not the exception. So how do we get there?

We should eliminate the middleman. This time, though, let’s eliminate the right one.

First, instead of putting out press releases, scientific journals, funding agencies and academic institutions should add one small requirement for all publications: the authors must write a brief, plain-language summary of the work, explaining the significance of their results for the general public. This summary must be written by the authors, not a paid PR person; if the name of the summary’s author doesn’t appear in the citation for the paper, it’s plagiarism. The PLoS “Author’s Summary” is a good model for this.

Second, all journals that want to be indexed in PubMed or other publicly sponsored databases should be required to upload their tables of contents – with links to the abstracts and plain-language summaries – to a central site on their publication date. This site must be open to everyone, and should use a uniform format for all of the summaries (one possible shape for such an entry is sketched below, after these requirements). The full text could be behind a paywall on the publisher’s own site, but the title, abstract, and summary would all be easy to access without navigating through weird interfaces or restrictions.

Third, no journal or institution would be allowed to release information about publicly-funded research ahead of time, “under embargo,” to a select group. The publication date would be the day everyone would have access to it, including reporters.

Finally, the teeth: any paper accompanied by a press release or sent out under embargo would be automatically banned from consideration for future tenure or grant review. Yes, I’m serious.
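
To make the second requirement concrete, here is one way an entry in that uniform central feed might look. This is purely a sketch: every field name and value below is my own invention, not part of any existing standard or proposal.

```python
# Purely illustrative: one entry in the hypothetical central summary feed.
# All field names and values here are invented for this sketch.
entry = {
    "journal": "Journal of Illustrative Results",
    "publication_date": "2012-09-06",
    "title": "A hypothetical paper title",
    "doi": "10.1000/example",                    # resolves to the publisher's site
    "abstract_url": "https://publisher.example/abstract",  # free to everyone
    "fulltext_url": "https://publisher.example/full",      # may sit behind a paywall
    "summary_author": "A. Researcher",           # must be an author on the paper itself
    "plain_language_summary": (
        "One or two short paragraphs, written by the authors, explaining "
        "what was found and why a general reader should care."
    ),
}
```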

What would this Draconian intervention accomplish? For starters, it would eliminate the PR-driven stories that are the crux of the problem. With no press releases, news outlets could either skip science entirely, or do real reporting on it. There’s no doubt the public is interested in science, so the publications that make the investment in reporting on it would draw the hits. Those that aren’t interested in doing it right could focus on something they’re better at, such as gossip.

“But it will put the PR people out of work,” I hear someone shout. Perhaps they’ll be able to get jobs in journalism now.

Some might object that eliminating embargoed access would slow down science reporting. Nobody would be able to print a news story the same hour the research paper came out, so the public would hear about new findings days or even weeks after the peer-reviewed publication appeared.

To that, I say “good.” Besides ensuring that all public discussion about the work can refer to the primary published data, it might help relieve some of the breathlessness of stories about “revolutionary new” discoveries. Scientific publications are not hurricanes; hearing about them a week later won’t kill anyone.

These ideas aren’t new. I’ve talked to several scientists and journalists who’ve made similar suggestions, but the conversation always ends with “of course it’ll never happen.”

It certainly won’t if we accept that attitude. Instead, let’s move the idea forward and see how far we can push it. Scientists often complain about the quality of science writing, so here’s something they could do about it. If they instead decide to throw their hands up and declare this proposal impossible, they should make a counterproposal – or stop complaining.

Comments are open.

Readbacks and Researchers

Recently, there’s been a major debate in the online science journalism community about a common but little-discussed practice in the news business: readbacks. That’s what we call the article excerpts journalists sometimes send to sources ahead of publication, during the fact-checking process.

The current discussion began with some comments investigative journalist Trine Tsouderos made on This Week in Virology. Seth Mnookin has an excellent summary of where the debate went from there. The short version is that a whole lot of writers and scientists have now weighed in on how, when, and whether science journalists should allow their sources to see pre-publication drafts.

So far, the conversation has included a whole slew of thoughtful, provocative blog posts, online comments, and tweets, plus one astonishingly misinformed essay. I don’t have any new arguments to add at this point, but I want to weigh in for two reasons.

First, one of the main lines of argument in the smart part of the debate came from Al Dove, so I have a unique opportunity to muddy the discussion by having two A. Doves presenting somewhat contrary opinions. If we can somehow involve law, go-kart racing, and photography, we’ll have complete pandemonium. More seriously, as someone who’s been a scientist, a journalist, and a debate coach, I think I can crystallize the main arguments pretty well.

Many, if not most of the scientists who’ve weighed in on this favor having science journalists send them extensive readbacks, perhaps even whole drafts, so they can ensure that the final article is accurate. The psychologists from Cardiff take this position to naive extremes, but Al Dove seems to represent a more mainstream view. To wit: scientists should be able to check story drafts because science is complicated, and when journalists screw it up they do the public a major disservice.

While it’s true that the subject is often complicated, and that journalists sometimes screw it up, neither of those things is unique to science. Modern finance, Federal policymaking, and patent law are also very complicated, and journalists have unarguably screwed up when reporting about those things, too. That doesn’t convince me that Wall Street executives, government bureaucrats, and attorneys should be given the right to edit news stories about them.

A free press – which I take to mean one that reports independently of what its sources might want said – is in my view the second most important pillar of a modern democracy, right after free speech. If we can speak our minds and be informed by independent reporters, we can bootstrap all of the other rights we might reasonably want. Giving any group – even scientists – a special pass from that independent scrutiny is placing one foot on a slippery slope to a very bad place. Having scientists check news stories before publication wouldn’t be catastrophic by itself, but it would open the door to all manner of special pleading by other interest groups. Science is unique in many ways, but this can’t be one of them.

From the journalists, the prevailing view seems to be “we’ll write what we want, and do our own fact-checking as we see fit.” I certainly respect and follow that approach, but part of my fact-checking is providing readbacks when I think it’s appropriate. I have the luxury of working mostly for monthly publications and other clients with relatively long deadlines, so when I’m not sure of an explanation I’ve written, I usually have time to run it past a source. If my sources offer corrections, I take them into consideration, but never feel obligated to make precisely the changes they’ve suggested. It is, after all, my name in the byline.

And that brings us to what I think is the crux of the matter. Scientists are accustomed to treating a publication as final. Peer review takes place before the journal article goes to press, and multiple revision cycles are the norm. Once the paper is published, it becomes part of the permanent scientific record, so accuracy is crucial.

Accuracy is also important in journalism, but our version of peer review comes after publication, in the form of public discussions, letters, and subsequent stories by other journalists. Do you disagree with the way I explained your work? My name is on the story. Call me out.

Politicians do this reflexively, throwing “the media” under the bus every time a reporter says something they dislike. Businesses do it with press releases, sending up huge billows of responses to any coverage that doesn’t suit them. Scientists don’t need to go to those extremes, and frankly shouldn’t, but the modern media landscape certainly provides plenty of opportunities to talk back. If you do it often enough, you might even find yourself explaining your science directly to the public on a regular basis. Then we all win.

Accentuate the Negative

When a clinical trial fails, everybody loses: the patients who participated hoping to benefit, the patients who didn’t participate but hoped to get the promising new drug once it hit the market, the researchers who dedicated thousands of hours of their time to it, and of course the trial sponsors, who are usually out many millions of dollars with nothing to show for it. With all that pain, it’s perfectly understandable that the sponsors and researchers often don’t get around to publishing their results.

[Image: Daedalus and Icarus. Even failed experiments can be informative. Image courtesy kamikazecactus.]

Understandable, but wrong. As two clinical researchers argue in a commentary in last week’s Science Translational Medicine, the tendency to bury negative clinical trial outcomes makes a bad situation even worse. In an accompanying press release, they explain:

In this situation “scientific information on the efficacy — or lack of efficacy — and safety — or lack of safety — of the investigational agents is not available to the research community, and the opportunity to learn from unsuccessful clinical trials is eliminated,” [Michael] Rogawski says.

For example, Rogawski says that it is assumed that the mouse models used to identify new drugs to treat epilepsy have high predictive value, because every marketed antiepileptic drug has demonstrated activity in the screening models. But “this assumption could be erroneous, because we do not know if there are drugs that were effective in the models but did not exhibit efficacy or had unacceptable side effects in clinical trials and were therefore terminated by their sponsors.”

The authors also argue that, besides sandbagging subsequent work, the failure to report negative results is unethical. Many patients sign up for a trial and put themselves at risk on the assumption that their sacrifice will do some good, furthering humanity’s understanding of disease. Sponsors who bury negative results are breaching that contract.

Rogawski and his co-author, Howard Federoff, go on to argue that the FDA has the power to change this unfortunate situation. Apparently, the FDA Amendments Act of 2007 may have given the agency the authority to mandate that all clinical trial results be published, regardless of whether the trial met its goals.

It’s always hard to tell how accurate scientists are when they’re trying to interpret legal statutes, but I hope Rogawski and Federoff are right, and that other science regulators are paying close attention. That’s because the “positive bias” in publication isn’t limited to clinical trials; it afflicts all of science.

Indeed, people have been moaning for years about the tendency to bury negative results, and the corrosive effect that has on future research. So far, though, efforts to correct it have been entirely voluntary. If regulators actually have been given the authority to force publication, it could be a big step forward. Failed experiments still won’t be much fun, but if we can learn something from them, at least they won’t be complete losses.

Elsevier Makes Good: Original Wakefield Takedown Now Free

A while back, I blogged about a particularly insidious glitch in the biomedical literature, in which a fraudulent study that caused enormous harm was available for free, while a contemporary – and strikingly prescient – commentary that eviscerated that study was locked behind a paywall. Now, thanks largely to the perseverance of TWiV co-host Rich Condit, this situation has been fixed.

Rich followed up on his original request, pulling strings with several contacts he’d made at Elsevier. He forwarded the conclusion of the saga this morning:

Dear Professor Condit

On behalf of Dr Astrid James, I can confirm that both the commentary and article in question are now free to access, subject to (free) registration on www.thelancet.com

Many thanks.
Richard Lane
Web Editor
The Lancet

Now the general public can read not only the infamous and now retracted paper from Andrew Wakefield, which purported to show a link between MMR vaccination and autism, but also the brief, thorough debunking of that paper by Robert Chen and Frank DeStefano. Thank you Rich, and thanks to the folks at Elsevier who finally got the point.

The only remaining question is why, in light of Chen and DeStefano’s analysis, The Lancet even published Wakefield’s paper in the first place.