Threading the NEIDL

After two long days of shooting and hundreds of hours of editing, the American Society for Microbiology and This Week in Virology are proud to release the documentary “Threading the NEIDL.” This video provides an unprecedented (and probably never-to-be-duplicated) look inside a state-of-the-art Biosafety Level 4 laboratory. BSL-4 labs are the ones that work on the most dangerous human pathogens, and the National Emerging Infectious Diseases Laboratories (NEIDL) at Boston University is the newest facility with labs built to the incredibly strict standards this type of science requires.

As you’ll see, we were able to get a detailed view of the inner workings of the NEIDL because it’s not operating yet. It seems that opening a high-level containment lab in the middle of a densely populated city didn’t sit well with the neighbors, and lawyers and government officials are still haggling over its fate. Meanwhile, this brand-new $200 million building is mostly empty. The silver lining is that the TWiV team was able to get inside and see spaces that would normally be inaccessible to outsiders. We also tried on some BSL-4 suits to see what it’s like to work in that environment, and chatted at length with the scientists who hope to do research in the NEIDL’s containment labs if and when they open.

The video ends on a positive note about the need to study dangerous pathogens, but it’s not a promotional piece. Community objections and BU’s handling of them get some coverage, and we went into more detail about those controversies in the associated podcast episode we released back in September. I’m still not convinced downtown Boston was the best place to stick the NEIDL. However, it does seem to have been built well, and I’d really hate to see a nine-figure sum of NIH funding flushed down the toilet now that the deed is done. Check out the video and make up your own mind:

Polio vs. bin Laden

Like most Americans, I felt a visceral surge of patriotic pride when I heard that we’d killed Osama bin Laden: pride in the President who ordered and orchestrated the bold raid, pride in the military that carried it out, and pride in a nation that, after nearly a decade of half-measures and absurd distractions, finally hit the primary target in the battle against Al Qaeda. We got the sonofabitch. I loathe violence and oppose capital punishment, but I’m pragmatic enough to understand that in some very, very rare cases, the only correct solution is a bullet. This was one of those cases.

But we have to be wary of surges of patriotism. They come from a place beyond reason, a primitive part of our tribal psyches, and leaders of all sorts have exploited them ruthlessly throughout history. As soon as the initial cheering dies down, we must set aside the easy jingoism and force ourselves to ask some hard questions. Exactly what did we just do, what did it cost, and was it really, objectively worth it? While asking those questions, though, we have to appreciate the conditions under which the decisions were made.

Fortunately, more than a year after that fateful night in Abbottabad, this phase of post-game analysis is still in full swing. Many foreign policy wonks bemoan the precedent the raid set: landing troops inside a sovereign nation without permission to carry out what can only be described as an assassination. US relations with Pakistan were fragile before; the two nations are barely speaking now.

Public health and infectious disease geeks like me, meanwhile, have been wringing our hands over a Central Intelligence Agency operation that took place before the raid. At the time, the Agency had a major problem. They’d pinpointed a residential compound where they strongly suspected bin Laden was holed up, but they couldn’t be sure. The man they saw pacing the courtyard in reconnaissance photos could have been him, or could have been some wealthy hermit with no connection to terrorism. When you’re on the cusp of dropping dozens of Navy SEALs into someone’s yard in the middle of the night and triggering an international incident, you really want to be sure you have the address right. The spooks decided to get creative.

Enlisting a Pakistani doctor named Shakil Afridi, the CIA mounted a campaign to collect DNA samples from children in Abbottabad, under the cover of a hepatitis B vaccination program. Afridi deployed nurses first in the slums of the city, then moved the campaign closer to the suspected bin Laden residence. If children from the target house carried bin Laden DNA, it would mean that Osama was most likely there. The effort appears to have failed, but the President ultimately took an enormous risk and authorized the raid anyway.

The public health community has been fiercely critical of this “fake” CIA vaccination campaign ever since. This recent post from infectious disease blogger Maryn McKenna typifies the general sentiment, which was exactly how I felt when I first heard about this incident. Laurie Garrett, meanwhile, took a more sensationalist tone in a shrill (and inaccurate) rant last month.

Other than a snide remark on Twitter when the news first broke, I’ve kept pretty quiet about this story. That’s because when I learned more details of the operation and started analyzing it, I became – and remain – deeply conflicted about it.

Public health has always been the poor cousin of medicine. When it works perfectly, nothing happens. There’s no dramatic moment when the patient’s heart re-starts, no miraculous recovery as the antibiotics take effect, no made-for-TV journey from sickness into health. The gains from vaccination, sanitation, and prevention are enormous and real, but they accrue on statistical tables that are very hard to explain to non-specialists.

As a result, public health workers have spent decades earning the public’s trust, especially among the poor and marginalized populations that suffer disproportionately from preventable diseases. When the amply funded CIA made the cynical decision to hijack that hard-won trust for a short-term military objective, people in the field were understandably upset. That said, the Agency’s decision was not entirely arbitrary, and its impact wasn’t necessarily as bad as some commentators have implied.

Let’s get some perspective here. This DNA-collection effort was part of a broader project to stop a prolific mass-murderer. Osama bin Laden did not have blood on his hands; he was swimming in it. This was a man who had directly masterminded the deaths of several thousand innocent civilians, triggered a war that killed and maimed tens of thousands, and continued to promote and design attacks to kill many thousands if not millions more. He made no secret of his desire to annihilate Israel, destroy Western civilization, and roll human rights back to the Middle Ages. The only meaningful distinction between bin Laden and Hitler was that the former did not control sufficient weaponry to scale up his plans – yet. Public trust in vaccination campaigns is certainly important, but it is not all-important. Other priorities do exist.

Nor was the CIA’s betrayal a unique affront to public health. Yes, there have been some setbacks in the World Health Organization’s vaccination campaigns in Pakistan since the bin Laden raid, but it’s not clear the CIA caused those problems. Indeed, the polio eradication campaign, originally slated to be done in the year 2000, has been struggling for years. Pakistan isn’t even the toughest challenge for WHO vaccinators at the moment – Nigeria is. The covert DNA screening surely didn’t help matters, but it’s ridiculous to presume, as Garrett apparently does, that everything in public health would be going perfectly if the CIA hadn’t done this.

Finally, while it certainly wasn’t a very well-structured vaccination campaign, and it was clearly done with ulterior motives, it’s probably not correct to refer to the CIA effort as “fake.” As far as we can tell, the Agency obtained and distributed real hepatitis B vaccine. I presume they chose an injected vaccine because it provided better cover for DNA collection than the oral polio vaccine. A proper hepatitis B immunization requires three doses spread out over six months, but partial immunization still provides some protection against the virus. A handful of Pakistani kids may avoid liver cancer because of this. That’s not a justification, just an observation.

So was the hunt for bin Laden ultimately worth the cost? I still don’t know. I do know that it’s neither fair nor useful to judge the entire operation through a single narrow lens, with information that was only available after the fact. The DNA collection may have failed, but nobody knew it would fail at the outset. The President authorized the raid anyway, but given how thin the evidence was, it would have been perfectly reasonable for him to call it off, leaving the world’s top terrorist alive to plot his next attack.

Foreign policy is full of the nuances, tradeoffs, and uncertainties of a deeply imperfect world. We can and should hold our government to account for its actions, and we can and should point out the real harms that come from undermining public health efforts. But a decision can only be bad if another option was clearly better, and in this case I can’t quite bring myself to condemn the CIA’s choice.

Just don’t make a habit of it, okay guys?

A Chat with Mike Osterholm

I got a call last night from Mike Osterholm, noted epidemiologist and member of the National Science Advisory Board for Biosecurity (NSABB). He wanted to talk about H5N1 flu – if you don’t know why, scroll down to the previous few posts.

First, I want to thank Mike for calling. We had a good conversation in which I think we came to understand each other’s viewpoints a bit better, though we still disagree strongly on some key issues. That means that, as I had hoped, the H5N1/censorship debate is finally moving forward. To clarify my own position, and also help those who aren’t in direct touch with NSABB members, here’s a synopsis of what we talked about. Bear in mind that this was not an “on the record” interview, so I won’t be quoting Mike, but I’m pretty sure he won’t mind me discussing our conversation publicly. If I misstate anything, I hope he posts a correction in the comments.

We talked briefly about his alleged dis of Peter Palese at the New York Academy of Sciences meeting Thursday night. Mike says he was misquoted, and I believe him. Let’s move on.*

Next, we talked about the controversial case-fatality rates for H5N1 flu. Mike has some valid methodological criticisms of some of the serological surveys, as I expected he would. I disagree with his assessment of the situation, but that’s not really relevant to what I see as the main point here. My real complaint is that the figures being cited for H5N1 fatalities shouldn’t be presented to the public in the first 50 words of an editorial – or anywhere, unless accompanied by a detailed explanation of their limitations. Comparing those rates to the mortality rates for, say, 1918 flu, is particularly misleading; the numbers were calculated by different standards. Indeed, if we apply a sufficiently strict definition of “case,” we can generate eye-popping figures even for the relatively mild 2009 H1N1 swine flu virus. Whether the 59% fatality rate for H5N1 is off by one order of magnitude or twelve isn’t the point. The point is that public statements that cite that figure and use it for apples-to-oranges comparisons are going to get called out by virologists as propaganda.
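The denominator problem at the heart of this argument is easy to see with a toy calculation. The sketch below uses invented numbers, not real surveillance data; it only shows the arithmetic of how the case definition drives the headline rate:

```python
# Toy illustration: the case-fatality rate depends entirely on which
# infections you count as "cases." All figures below are hypothetical.

def case_fatality_rate(deaths, cases):
    """CFR = confirmed deaths divided by counted cases."""
    return deaths / cases

deaths = 59

# Strict definition: only severe, lab-confirmed, hospitalized cases count.
confirmed_cases = 100
print(f"Strict denominator: {case_fatality_rate(deaths, confirmed_cases):.0%}")

# Broad definition: include mild infections detected by serological surveys.
all_infections = 10_000
print(f"Broad denominator:  {case_fatality_rate(deaths, all_infections):.2%}")
```

The same 59 deaths yield a 59% rate under the strict definition and 0.59% under the broad one, which is why comparing a strictly-counted H5N1 rate to the broadly-estimated 1918 rate is apples to oranges.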

But all of that is really a sideshow. The main question we need to focus on is whether it’s appropriate to redact key data from a paper that reports unclassified research. Mike was blunt and consistent in stating that the NSABB wants this to be an isolated incident, not a general approach. That’s reassuring. Unfortunately, I’m afraid it’s not up to him. My biggest concern about the NSABB recommendation is that it sets a dangerous and potentially corrosive precedent, a possibility I don’t think the committee gave adequate weight.

I’m not terribly worried about the NSABB, or about what happens with the H5N1 data, so long as they’re eventually published. I’m worried about the much broader picture. We’ve now seen a government advisory board pressure publishers to censor what they publish, based on entirely theoretical “security concerns.” Because said advisory board’s media statements created a public panic over the issue, the publishers really can’t negotiate this censorship from a position of strength. They pretty much have to go along with it. Prior restraint on publication is an extreme, radical intervention, even if it’s applied indirectly, as this was. US courts have consistently (and correctly) ruled that this type of censorship is only barely acceptable if there is an absolutely compelling public interest in suppressing the information. Theoretical threats don’t cut it.

So would it now be okay for the Minerals Management Service, citing undefined “security concerns,” to pressure a publisher over an article on fracking? Can a politically motivated appointee at HHS threaten to smear researchers who publish data on abortion? Could a rightward-facing President appoint an advisory panel to lean on climate change publications? These are the sorts of scenarios that worry me, and they seem much more likely than the idea of terrorists synthesizing super-flu. There’s a long history of governments – even in democracies – trying to censor information, while the number of deadly non-state bioterror attacks still stands at precisely zero. From where I sit, the potential harms of censorship far outweigh any benefits of trying to conceal the data, especially since that horse is already out of the barn.

To his credit, Mike didn’t push the bioterrorist angle. Instead, he brought up a new argument: that a biohacker hobbyist might try to generate the new H5N1 strains just for bragging rights. While that’s theoretically possible, it’s hardly a compelling reason to cripple an entire field of research. There are already regulations in place requiring people to work with these viruses under stringent containment conditions, and animal experimentation involves more paperwork than a satellite launch. It would be virtually impossible to do such work undetected. So while a few exceptionally motivated (and deep-pocketed) biohackers might try to do this, they’re extremely unlikely to succeed. Even if they did, it’s not at all clear that the resulting virus would be dangerous.

Ultimately, this comes down to the question that always plagues decisions based on the “precautionary principle.” Yes, we should avoid doing something that risks causing harm, but where do we draw the line on risk? Giving every nation equal access to enriched plutonium would probably be a bad idea. But should we shut down the Large Hadron Collider at CERN, lest it destroy the universe? Or mandate that everyone wear hardhats outdoors to guard against falling space debris?

Clearly, many risks don’t justify the interventions that would be required to mitigate them. I think that the risks of publishing the new H5N1 studies fall firmly into that category. Others are certainly free to disagree, but the time to have that discussion is before the work is done. And that was one point on which Mike and I seem to be in violent agreement; it was nice to chat, but neither of us wants to find ourselves having this same conversation again.

* (2012.2.4 18:30) I did not intend to minimize what was apparently an extremely acrimonious exchange, regardless of exactly what words were used (see comments below). My intent here is to move the public discussion forward, though, and dwelling on questions of tone won’t do that. Mike, if you’re reading this, I suggest giving Peter a call to see if you two can bury the hatchet. It wouldn’t hurt to drop Vincent a line, too. Reasonable people can disagree, but pissing people off, as you apparently did, won’t help your case.

Who’s Afraid of the Big, Bad Bioterrorist?

The recent dustup about The H5N1 Bird Flu Plague That Will Kill Us All (not) has brought the topic of “bioterrorism” into the media spotlight again. This is an issue I’ve been following for several years, and during that time I’ve come to a conclusion that’s pretty much the opposite of everything we’ve been told: biodefense is largely a waste of money.

Let’s start with the current definition of the problem. The US government has prepared a list of “select agents” that are considered potential biological weapons. Researchers who work on these agents have to get special clearances, and an entire multi-billion-dollar industry of defense contractors has sprung up to help the nation prepare for a terrorist attack using one of these weapons. I have two problems with this list.

First, several of the listed “agents” shouldn’t be treated as biological threats at all; they are chemical weapons. Botulinum toxin and ricin, for example, both appear on the list, but an attack with either of these toxins would bear no resemblance to a biological outbreak. Their toxicity is generally acute, so the first responders to such attacks would be police and firefighters, whereas the first responders to a true biological attack would be physicians and nurses. Toxins don’t reproduce or spread, so the response would be about containing and decontaminating the scene, not tracking contacts and cases. The list conflates two completely different types of threats. But perhaps that’s just a technical gripe.

The second problem is much more serious. Eliminating the toxins, we’re left with a list of infectious bacteria and viruses. With a single exception, these organisms are probably near-useless as weapons, and history proves it.

There have been at least three well-documented military-style deployments of infectious agents from the list, plus one deployment of an agent that’s not on the list. I’m focusing entirely on the modern era, by the way. There are historical reports of armies catapulting plague-ridden corpses over city walls and British officers trying to inoculate blankets with Variola (smallpox), but it’s not clear those “attacks” were effective. Those diseases tended to spread like, well, plagues, so there’s no telling whether the targets really caught the diseases from the bodies and blankets, or simply picked them up through casual contact with their enemies.

Of the four modern biowarfare incidents, two have been fatal. The first was the 1979 Sverdlovsk anthrax incident, which killed an estimated 100 people. In that case, a Soviet-built biological weapons lab accidentally released a large plume of weaponized Bacillus anthracis (anthrax) over a major city. Soviet authorities tried to blame the resulting fatalities on “bad meat,” but in the 1990s Western investigators were finally able to piece together the real story. The second fatal incident also involved anthrax from a government-run lab: the 2001 “Amerithrax” attacks. That time, a rogue employee (or perhaps employees) of the government’s main bioweapons lab sent weaponized, powdered anthrax through the US postal service. Five people died.

That gives us a grand total of around 105 deaths, entirely from agents that were grown and weaponized in officially-sanctioned and funded bioweapons research labs. Remember that.

Terrorist groups have also deployed biological weapons twice, and these cases are very instructive. The first was the 1984 Rajneeshee bioterror attack, in which members of a cult in Oregon inoculated restaurant salad bars with Salmonella bacteria (an agent that’s not on the “select” list). 751 people got sick, but nobody died. Public health authorities handled it as a conventional foodborne Salmonella outbreak, identified the sources and contained them. Nobody would even have known it was a deliberate attack if a member of the cult hadn’t come forward afterward with a confession. Lesson: our existing public health infrastructure was entirely adequate to respond to a major bioterrorist attack.

The second genuine bioterrorist attack took place in 1993. Members of the Aum Shinrikyo cult successfully isolated and grew a large stock of anthrax bacteria, then sprayed it as an aerosol from the roof of a building in downtown Tokyo. The cult was well-financed, and had many highly educated members, so this release over the world’s largest city really represented a worst-case scenario.

Nobody got sick or died. From the cult’s perspective, it was a complete and utter failure. Again, the only reason we even found out about it was a post-hoc confession. Aum members later demonstrated their lab skills by producing Sarin nerve gas, with far deadlier results. Lesson: one of the top “select agents” is extremely hard to grow and deploy even for relatively skilled non-state groups. It’s a really crappy bioterrorist weapon.

Taken together, these events point to an uncomfortable but inevitable conclusion: our biodefense industry is a far greater threat to us than any actual bioterrorists.

For comparison, Timothy McVeigh pulled a Ryder rental truck full of ammonium nitrate and fuel oil (both very easily obtained) in front of a Federal building, and killed 168 people. The 9/11 hijackers killed almost 3,000 people and struck the headquarters of the United States military, using box cutters and basic flight training. In 2000, a couple of guys in an inflatable boat full of explosives crippled an American destroyer. I could go on, but hopefully you get the point: conventional weapons are orders of magnitude more effective for terrorism than biological ones.

Astute readers may have noticed that I mentioned a single exception on the select agent list. I’m talking about smallpox, and the reason it’s an exception is interesting: it’s a good weapon only because we successfully eradicated it.

Had the World Health Organization focused on controlling smallpox instead of eradicating it, there would have been continued pressure to develop improved vaccines, and likely continued vaccination. That’s the pattern now with poliovirus, which has been incorporated into one of the standard combination vaccines that kids receive.

But because the WHO focused on eradicating smallpox instead, they stuck with the primitive vaccine originally developed in the 18th century by Edward Jenner. Once the world was certified smallpox-free, vaccination stopped. Now, nearly everyone born in the past forty years or so is susceptible to this highly contagious, highly lethal virus.

Smallpox would still be a very poor choice for bioterrorism, but for a different reason than the rest of the select agents. A terrorist group that actually got ahold of it could probably culture it and deploy it without much trouble – in many ways it would be easier to work with than anthrax. However, there is no question who would be hit hardest by a new global pandemic of smallpox: the poor countries. The US already stockpiles hundreds of millions of doses of smallpox vaccine and antivirals. Once an outbreak was identified, it would be straightforward to track and stop, at least in the developed world. The people who would suffer and die would be precisely the ones most terrorist groups are trying to represent. Military types call that “blowback,” and it’s a very bad thing.

So what should we do? First, stop panicking. Terrorist groups have repeatedly said they want biological weapons, but that’s either propaganda or fantasy. The groups that have actually pursued such weapons have found that they’re a complete waste of resources. However, the fear of bioweapons has caused governments around the world – particularly in the US – to spend billions of dollars on technologies they will never need. Spending the same money on our ailing public health system would have been a much better investment.

We should keep stockpiling vaccines and antivirals against smallpox, against the tiny but nonzero probability that some terrorist might actually have the contradictory combination of resourcefulness and stupidity necessary to get ahold of this virus, then deploy it. We should probably continue researching improved smallpox vaccines, too, if only because the work could yield useful insights about other, more relevant viruses. But most importantly, we should drastically reduce the size of our current “biodefense” efforts, which have unambiguously proven themselves to be more harmful than beneficial. Hiring and training more people to work with select agents is the problem, not the solution.

Open Access vs. Local Politics

Someone just asked me what I thought of Michael Eisen’s op-ed piece that came out in the New York Times a couple of weeks ago. Eisen wrote about a new bill in Congress that would roll back an NIH policy requiring NIH-funded researchers to submit copies of their publications to the National Library of Medicine’s publicly accessible web site. As Eisen explains:

But a bill introduced in the House of Representatives last month threatens to cripple this site. The Research Works Act would forbid the N.I.H. to require, as it now does, that its grantees provide copies of the papers they publish in peer-reviewed journals to the library. If the bill passes, to read the results of federally funded research, most Americans would have to buy access to individual articles at a cost of $15 or $30 apiece. In other words, taxpayers who already paid for the research would have to pay again to read the results.

This is the latest salvo in a continuing battle between the publishers of biomedical research journals like Cell, Science and The New England Journal of Medicine, which are seeking to protect a valuable franchise, and researchers, librarians and patient advocacy groups seeking to provide open access to publicly funded research.

The bill is backed by the powerful Association of American Publishers and sponsored by Representatives Carolyn B. Maloney, Democrat of New York, and Darrell Issa, a Republican from California. The publishers argue that they add value to the finished product, and that requiring them to provide free access to journal articles within a year of publication denies them their fair compensation. After all, they claim, while the research may be publicly funded, the journals are not.

I work for some of those journals, and don’t agree with the policy their lobbyists are promoting here. That said, I’m not entirely persuaded by the open access argument Eisen promotes. I’ve described some of my concerns on this blog already. Briefly, I don’t think the open access movement is really about making research “free.” It’s mainly haggling over price and billing.

The public absolutely should have direct access to the results from taxpayer-financed research, without having to pay a second time. By charging exorbitant per-article access fees and subscription rates, subscriber-supported journals are putting profit over public interest. Of course most of them are private corporations, so they’re supposed to act selfishly. That’s why we need a regulation that requires them to release these papers to the public within a reasonable time frame.

That said, the business model Eisen supports isn’t truly free. Open access journals such as the PLoS family of publications invariably charge a hefty “page fee” for researchers to publish their work. They also make a considerable amount of money from advertising. This has led to a booming industry of “open access” journals, some of which are little more than rebranded vanity presses. Don’t let the charitable-sounding description fool you; open access journals, even the really good ones, are still very much about profit.

Not that there’s anything wrong with that. I make my living from those profits. Indeed, while Eisen and other open access proponents often point out that peer reviewers work for free, they seldom mention the rest of the hardworking staff required to publish a credible journal. At journals such as Science and Cell, for example, someone with the title “Research Editor” has to receive the deluge of submitted manuscripts, triage them, distribute them to appropriate peer reviewers, evaluate the reviewers’ comments, and ultimately decide what to accept. Good research editors are not easy to find, and they absolutely don’t (and shouldn’t be expected to) work for free. For journals that also have news sections, as all of the really big ones now do, there are also news editors and writers like me. If we want to continue to have that added value in research publications – and the evidence is that everyone does – then we have to figure out how to pay for it. There’s also the cost of page design, archiving, and for journals that still have a paper edition, printing and distribution.

The real distinction between subscriber-supported and open access journals, then, is not whether they are in business to make a profit, but who pays and how much. In open access, the researchers pay through their taxpayer-funded grants and the advertising costs of the equipment and services they buy. In the subscription model, readers pay. So the taxpayers ultimately pick up the tab in both cases, just by different mechanisms.

Back when journals were only available on paper, and anyone could get access to them through the library system, the public could read the research they’d paid for at no cost. It just took a while through interlibrary loan. Now we expect everything to be available online, so the NIH open access policy forces the papers to be released that way. As I said, I think that’s appropriate. Yes, someone could still go to the library and ultimately get access to all of the papers, but in the 21st century we shouldn’t require that.

I think the solution is for journals that are currently subscriber supported to move to a business model that’s more like open access. The NIH policy is a good nudge in that direction, as it mandates public release of the papers, but only after a twelve-month grace period. While the subscriber-supported journals can still charge for immediate access, the policy puts them on notice that they’d better come up with a new plan for the long term. As PLoS and others have demonstrated, that doesn’t have to mean working for free.

So why did Maloney and Issa push a bill that would derail this evolution in science publishing? Well, Maloney’s Congressional district includes the US corporate headquarters of mega-publisher Elsevier, and Issa’s district is adjacent to two other Elsevier offices. Just sayin’.

The Day the Science Died

This afternoon, a coalition of influenza virologists released a statement saying that they are voluntarily suspending research on H5N1 “bird flu” for 60 days. This was in response to the Category 5 hype storm that has accompanied the publication of two papers about this virus. My previous post on this topic (and links therein) provides a quick review for those who haven’t been following this story.

I’m of two minds about the new moratorium. As a scientist, I think it’s moronic. H5N1 flu is biologically interesting, and could become a major public health concern if it ever manages to sustain human-to-human transmission. Though its lethality has probably been vastly overstated, there’s no doubt that it is capable of killing at least some people, under some circumstances. The demonstration that it’s possible for H5N1 to adapt to a mammalian host, even one that diverged from the primate lineage many millions of years ago, shows that we need to step up H5N1 research, not halt it.

However, the biodefense industry’s recent push to whip up fear has completely distorted the public’s perception of this issue. Millions of nonscientists are now convinced that the recent virus transmission work was dangerous, perhaps even foolhardy, and that terrorist groups could easily take advantage of the new findings to kill millions. None of that is even remotely true. Unfortunately, people who are in a panic aren’t capable of rationally evaluating the nuances, so the scientists who’ve been trying to defend ongoing H5N1 work are at a disadvantage. Saying they’ll suspend that work is the only reasonable public relations strategy at this point.

Around the same time the moratorium was announced, a partially overlapping group of virologists sent an open letter to the National Science Advisory Board for Biosecurity (NSABB), giving that board a thump on the head. It was the NSABB that started this whole circus, by calling for the new H5N1 publications to be partially censored. In the open letter, the virologists argue that this censorship is unjustifiably hindering scientific progress. They were apparently too polite to say that deliberately omitting data from a publication in response to a nebulous, entirely theoretical “security risk” is antithetical to the whole scientific enterprise, so I’ll do it for them.

The moratorium should help bolster public confidence in the scientists’ ability to address this issue themselves, while the letter to the NSABB lays the groundwork for a productive debate based on reason rather than fear. Hopefully, in a couple of months everyone will be able to calm down and get back to work.

Coast IRB Stings Congressional Investigators

This is pretty amusing. A few days ago, I got a press release from an Institutional Review Board (IRB) contractor called Coast IRB. IRBs are the organizations that monitor human clinical research to make sure everything is done in accordance with modern ethical standards, so ideally these folks would take their jobs pretty seriously. Apparently Coast IRB does:

On Friday, March 6, 2009, Coast IRB, an Independent review board, discovered that a protocol submitted to it for review of a medical device called Adhesiabloc by a Device Med Systems of Clifton, Virginia, was in fact fraudulent in violation of federal and state law. Upon receipt of proof of the fraud, Coast IRB and its CEO, Daniel Dueber, ordered the immediate termination of the clinical trial, referred evidence to federal and state authorities for investigation and prosecution, and instituted measures to prevent a recurrence.

So far, so good. They found out that someone was trying to set up a shabby clinical trial, and they called shenanigans. Today, though, Coast IRB felt the need to issue a new press release:

In a press release issued on March 10, 2009, Coast IRB informed the public that it had discovered what appeared to be a fraudulent clinical trial submitted to that Independent review board for evaluation. Coast IRB has since learned that the fraudulent trial was apparently commenced as part of a congressional “sting” operation. Apparently at the behest of the U.S. House Subcommittee on Oversight and Investigations of the House Committee on Energy and Commerce, agents submitted false credentials and clinical trial data to Coast IRB and possibly other IRBs to induce them to perform reviews. Evidence of the progress of the trials could then form the basis for arguments critical of the FDA and in favor of greater regulatory oversight.

So they stung the sting operation, and by sending out a press release about it, blew the cover off the whole thing. Was that wrong? Well, the folks at Coast aren’t taking any chances – they’re going on the offensive:

Unless pursuant to a court order or under the auspices of the Department of Justice, the sting could be illegal, violating wire fraud, mail fraud, and state laws against fraud and false credentialing.

First you bust an apparent fake clinical trial, then you out a Congressional sting operation, and now you’re arguing that Congress actually broke the law by initiating the sting. Is this really the right tone to take? At least the parties will have an opportunity to discuss the matter further:

Coast IRB CEO Daniel Dueber had been asked by subcommittee staff to submit to an informal interview prior to giving testimony before the committee on March 19. Following notice from Coast IRB that the fraud had been detected, committee staff informed Coast that the hearing would be postponed until March 26, 2009 and that the chairman of Coast IRB and possibly another Coast official would now have to appear for a “transcribed interview” with committee staff.

Um, did we do something wrong? Did you? Let’s not take any chances – try reminding everyone that we’re on the same side:

“We are doing our level best to ensure protection for subjects of clinical trials under our review, an objective we share with the Food and Drug Administration,” said Daniel Dueber, CEO of Coast IRB.

Honestly, I wish the folks at Coast IRB the best of luck. They’ll probably need it.

Genetic Privacy: An Inalienable Right?

Congresswoman Louise Slaughter (D-NY) has a thought-provoking editorial in the 11 May issue of Science:

The Genetic Information Nondiscrimination Act (GINA) languished in past Congresses for 12 years. But finally, new leadership in the House of Representatives has given the bill its best chance to become law since its introduction in 1995. On 25 April, GINA passed the House by a vote of 420 to 3. The act will prohibit health insurers from denying coverage or charging higher premiums to a healthy individual solely because they possess a genetic predisposition to develop a disease in the future. It will also bar employers from using genetic information in hiring, firing, job placement, or promotion decisions.

She goes on to argue that genetic discrimination is a real and insidious danger, and that the new legislation is critical to stopping it. While I do believe that people are already facing uninformed discrimination on the basis of primitive, misinterpreted genetic tests, I’m not convinced that we should have an inalienable right to keep our genetic information secret from insurers and employers. Currently, insurers can ask if I have a family history of, say, cancer or diabetes. That’s genetic information. So if a test comes along that makes solid predictions from my actual DNA sequence, rather than a potentially flawed inference from vague family history data, why am I suddenly allowed to keep that a secret?

Friendly Fire in the War on Bioterror

Today’s New York Times brings a sobering story:

A 2-year-old boy spent seven weeks in the hospital and nearly died from a viral infection he got from the smallpox vaccination his father received before shipping out to Iraq, according to a government report and the doctors who treated him.

The boy, who lives in Indiana and has recovered, became ill in early March, two weeks after his father’s deployment was delayed and he was allowed to make a trip home. Over the next few weeks, the boy suffered kidney failure and lost most of his skin to the disease, eczema vaccinatum.

By my count, “bioterrorism” has killed somewhere around six Americans. How many have been killed and maimed by the multi-billion-dollar-and-mushrooming “biodefense” response?

Genetically Engineered Pests – Coming To a Field Near You?

Entomologically minded readers of the Federal Register (you know who you are) might have noticed an interesting item shortly before Christmas: in the 19 December issue, the Department of Agriculture posted this note asking for the public’s thoughts about genetically modifying insect pests. Specifically, they’re working on inserting some choice genes into fruit flies and pink bollworms, then releasing the re-engineered critters into the environment. I’m sure the usual naysayers will soon be screaming about Frankenflies (which, by the way, would be a good name for a band), but this project could actually be a tremendous boon to the environment.

[Image: Pink bollworm life cycle, courtesy USDA]

The focus of the effort is actually a decades-old insect control strategy called the Sterile Insect Technique, or SIT. Unlike most pest control approaches, SIT has the potential to eradicate an insect species completely, without any harm to non-target species. Somewhat counterintuitively, the first thing you do in SIT is to build a hatchery and start rearing thousands of the insects you want to wipe out. Then you kill the females, and expose the males to some chemical or physical insult that sterilizes (but doesn’t kill) them. Finally, you release the sterile males into the wild, where they mate with females, who then lay clutches of eggs that will never hatch.

Because many insect species mate only once in a season, each sterile male that competes successfully for a mate wipes out one female’s entire reproductive output for the year. The next season’s crop of wild flies will be that much smaller, but you release the same number of sterile males, giving them an even better chance of bedding an otherwise fertile female. After several generations of this, the wild population will go extinct, and you can fold up the hatchery and go home.
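The generational arithmetic above is easy to sketch. Here's a toy model of my own (the growth rate and release size are made-up illustrative numbers, not anything from an actual USDA planning tool) that tracks a wild population under a constant sterile-male release:

```python
# Toy sterile-insect-technique model (illustrative numbers only).
# Each season every wild female mates once; her odds of picking a
# sterile male are RELEASES / (RELEASES + wild males). Each fertile
# female adds GROWTH surviving adults, split evenly by sex.

GROWTH = 5.0        # surviving offspring per fertile female (assumed)
RELEASES = 400_000  # sterile males released each season (assumed)

def run_sit(females, males, seasons):
    history = [females]
    for _ in range(seasons):
        # Females that happened to pick a wild male stay fertile.
        fertile = females * males / (males + RELEASES)
        adults = fertile * GROWTH
        females = males = adults / 2
        history.append(females)
    return history

pop = run_sit(100_000, 100_000, seasons=8)
for season, f in enumerate(pop):
    print(f"season {season}: {f:,.0f} wild females")
```

With these assumed numbers, the wild population halves in the first season and collapses within four or five, because the fixed release looms ever larger against a shrinking wild population: exactly the accelerating effect described above.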

It’s a stunningly elegant strategy, and even more impressively, it works. Edward Knipling, the entomologist who invented SIT, proved that 25 years ago, with Cochliomyia hominivorax, the screw worm fly. Screw worm (another good name for a band) gets its common name from a habit of laying its eggs on live animals, letting the maggots burrow into the flesh (like a screw) to feed on blood and develop. After pupating, the adult hatches out of the now-festering wound and flies away. It’s very gross, and very costly; historically, screw worm infestations were an enormous problem for livestock farmers in the Americas, especially cattle ranchers. Using SIT, Knipling and his colleagues eradicated the fly from this country by 1982. It’s since been extirpated from several other countries the same way.

Now the Department of Agriculture is trying to make SIT more efficient. With current technology, SIT programs are damnably expensive, mainly because of the challenges of separating the sexes of tiny insects (can you tell girl bollworms from boy bollworms? ten thousand times a day?), and making sure that the sterilization works just right. Too much radiation or chemical treatment, and you get dead males. Too little, and you get fertile males. Neither is useful for SIT.

Genetic engineering could address both problems. Inserting an inducible gene regulation system that selectively kills females in the larval stage, and another that sterilizes males, would make the breeding program a snap: just induce the genes, grow up the flies, and release whatever survives pupation. One could also get more sophisticated. How about building a paternally inherited, female-lethal gene expression system into the flies rather than sterilizing them? Males coming out of the hatchery would mate with wild females, who would then lay eggs that would only produce more males – males that would carry the same trait to the next generation. This sort of genetic pesticide, if it worked, could wipe out all of the females in a few generations, and the whole species right after that.
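A toy model (again, entirely illustrative numbers of my own, not anything from the USDA notice) shows why a heritable female-lethal trait would be more potent than plain sterilization: carrier-sired sons survive, inherit the trait, and top up the next season's effective release:

```python
# Toy model of a paternally inherited, female-lethal trait
# (illustrative numbers only). Daughters sired by carrier males die
# as larvae; sons survive and carry the trait forward.

GROWTH = 5.0        # surviving offspring per fertile female (assumed)
RELEASES = 400_000  # carrier males released each season (assumed)

def run_female_lethal(females, wild_males, seasons):
    carriers = 0.0
    history = [females]
    for _ in range(seasons):
        carriers += RELEASES
        wild_share = wild_males / (wild_males + carriers)
        offspring = females * GROWTH
        # Only wild-sired broods contribute surviving daughters.
        females = offspring * wild_share / 2
        wild_males = offspring * wild_share / 2
        # Carrier-sired broods lose their daughters but keep their
        # sons, who add to next season's carrier pool.
        carriers = offspring * (1 - wild_share) / 2
        history.append(females)
    return history

decline = run_female_lethal(100_000, 100_000, seasons=8)
print(f"females after 3 seasons: {decline[3]:,.0f}")
```

Because every fertile mating with a carrier still produces trait-carrying sons, the effective release grows even as the wild population shrinks, and with these assumed numbers the females are essentially gone within a handful of seasons.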

Beyond a few interesting laboratory findings, all of this is theoretical right now, which is exactly why it’s good to see the Department of Agriculture asking for comments on it. Maybe there will be an informed public debate about the risks and potential of this approach. Or maybe there will be a lot of screeching and shouting from various corners, followed by a permanent stalemate that keeps this promising technique on ice.