
Summer 2016 Bulletin

Managing the Benefits and Risks of Nuclear, Biological, and Information Technologies

Project: Global Nuclear Future

On May 10, 2016, the Academy hosted a meeting at the University of Chicago on the benefits and risks of nuclear, biological, and information technologies. The speakers included Robert Rosner (William E. Wrather Distinguished Service Professor in the departments of Astronomy & Astrophysics and Physics, as well as in the Enrico Fermi Institute and the Harris School of Public Policy Studies at the University of Chicago), James M. Acton (Co-Director of the Nuclear Policy Program and Senior Associate at the Carnegie Endowment for International Peace), Elisa D. Harris (Nonresident Senior Research Scholar at the Center for International and Security Studies at Maryland), and Herbert Lin (Senior Research Scholar for Cyber Policy and Security at the Center for International Security and Cooperation at Stanford University). The program, the Morton L. Mandel Public Lecture, served as the Academy’s 2038th Stated Meeting and included a welcome from Robert J. Zimmer (President of the University of Chicago) and Jonathan F. Fanton (President of the American Academy). The following is an edited transcript of the discussion.

Robert Rosner
Robert Rosner is the William E. Wrather Distinguished Service Professor in the departments of Astronomy & Astrophysics and Physics, as well as in the Enrico Fermi Institute and the Harris School of Public Policy Studies at the University of Chicago. He serves as Cochair of the Academy’s Global Nuclear Future Initiative. He was elected to the American Academy of Arts and Sciences in 2001.

Modern concerns about dual-use technologies emerged in concert with fears about the proliferation of nuclear weapons. The history of dual-use technologies, however, long predates the Cold War and the modern era. For example, the chemical advances underlying the use of fireworks in Imperial China were adapted in the tenth century A.D. to produce fire arrows for use in battle. Arguments about dual-use go back literally millennia.

What has changed is not the balance of dual-use technologies but the ability of modern weaponry to kill and damage human society on vast scales. This dynamic is probably best captured by J. Robert Oppenheimer’s sobering allusion to the two-thousand-year-old Bhagavad Gita, “I am become Death, the destroyer of worlds.”

These words came to mind when Oppenheimer described the Trinity nuclear explosion. That was the point when the physicists involved in the Manhattan Project realized they had unleashed something unique in the history of humankind; namely, the ability to wipe human life off the face of the earth.

What we are faced with today is weaponry that is appropriately referred to as “weapons of mass destruction.” When we talk about dual-use concerns, we are really worried about such weapons. So, what can we say today about the management of the risks inherent in dealing with such weaponry, and with the technologies that have contributed to its existence?

In response to such questions, the American Academy’s Global Nuclear Future Initiative, which I direct alongside Steven Miller at Harvard and senior advisor Scott Sagan at Stanford, has been taking a comprehensive look at the range of current efforts to constrain dual-use technologies; that is, efforts to create dual-use governance structures with a particular focus on their effectiveness in controlling the spread of technologies that have both beneficial and harmful consequences.

We began with a series of small workshops in 2012 that sought to explore the critical issues surrounding dual-use technologies. These workshops led to a larger meeting held at Stanford University in January 2013, which helped us to narrow our focus. We decided to organize our strategic approach around governance: What have we learned about the potential for dual-use technology control from the decades-long efforts to restrict the spread of technology related to nuclear and biological weapons and, more recently, cyber weapons? We were fortunate to enlist Elisa Harris, one of our speakers this evening, to organize a meeting, held last year in Chicago, that focused on these questions, and to address the issue of biological technology herself. We also convinced James Acton and Herbert Lin, who are speaking this evening as well, to offer their views on the governance issues in the nuclear and information technology domains, respectively.

Our three speakers tonight have written chapters in the recently published volume Governance of Dual-Use Technologies: Theory and Practice. All three are clearly well versed in the issues that we are going to discuss this evening.

James M. Acton
James M. Acton is Co-Director of the Nuclear Policy Program and a Senior Associate at the Carnegie Endowment for International Peace.

About 150 meters from us is the site of CP-1, the world’s first nuclear reactor, a part of the Manhattan Project. This is a reminder that nuclear technology is not civilian technology that happens to have a military purpose; it is military technology that happens to have a civilian purpose.

In 2007, the British political scientist William Walker wrote about how the exceptional nature of nuclear weapons calls for an exceptional kind of cooperative politics. Nuclear technology really does provide the paradigm example of how cooperative governance efforts can have considerable success in restraining the potentially harmful side of dual-use technology.

The system for regulating nuclear technology divides into two largely separate systems. One is the nuclear security architecture, devoted to preventing nonstate actors from acquiring nuclear materials. That is the less developed system; it is a patchwork largely made up of non-legally binding agreements with no verification.

The second system, and the focus of my remarks, is the nonproliferation side, the system designed to stop the further spread of nuclear weapons to states. This is an almost universal, legally binding, verified regime.

Often when people think about the nonproliferation regime, they think of one of two elements, or perhaps of both at once. The first element comprises the international oversight mechanisms used to deter or detect proliferation. Safeguards implemented by the International Atomic Energy Agency are the best-known elements of that regime, but not the only ones.

The second element comprises strategic trade controls, the circumstances under which states agree to trade in potentially sensitive nuclear technology. These controls are based on both domestic laws and international coordination.

Although imperfect, the two layers have been remarkably effective over the course of the nuclear age. That effectiveness has largely been facilitated by the specific characteristics of nuclear technology, which stand in contrast to biological and cyber technology.

First, with nuclear technology we are worried about only a few materials: primarily, highly enriched uranium and plutonium, and under most circumstances those materials are conserved. (There is an asterisk here, but I won’t bore you with it!) This permits relatively easy oversight, in contrast to biological organisms, which have the annoying habit of reproducing; and in contrast to cyber, which doesn’t really deal with materials at all.

Second, governments today remain central to nuclear technology. Without government involvement, there would be no nuclear technology. This is very much in contrast to bio and cyber, and it allows for a focus on the actions of states and governments.

Third, nuclear technology has spread to a surprisingly limited degree, especially relative to bio and cyber, making strategic trade controls relatively effective.

Unfortunately, the stresses on this regime, already huge, are growing. Some of those stresses are technical. Some of the new nuclear technologies–and even some not so new nuclear technologies, such as gas centrifuge enrichment plants–present huge detection challenges. Patterns of trade are becoming much harder to monitor because they are increasingly complex.

These technical challenges might have solutions but for the fact that the politics surrounding the regime have become increasingly frozen and acrimonious. States lack the political willingness to enhance the regime.

Perhaps most serious is the lack of political willingness to do something when misbehavior is actually detected. In my chapter for the Academy’s new report on dual-use technologies, I discuss in some detail why this acrimonious politics has arisen.

Suffice it to say that, in spite of the Iran deal, which I think is one of the few bright spots on the horizon, I am not terribly sanguine about the long-term future of the nonproliferation regime. Change is likely to be both difficult and incremental. Probably the biggest opportunity we have for improving the regulation of dual-use nuclear technology, and it is a limited opportunity, lies in the domestic decision-making processes used to decide whether to develop and deploy new nuclear technologies.

Let me give you an example that I think illustrates the lacuna in this system at the moment. Back in 2009, General Electric Hitachi submitted a license application to the Nuclear Regulatory Commission in the United States to build a new laser enrichment facility. Assessing whether this was a net positive or a net negative was a genuinely difficult decision, I believe.

From the standpoint of encouraging free enterprise and allowing private companies to make profits, it was potentially a good thing. At the same time, the technology posed potential proliferation risks. If the United States were to commercialize this technology, what is the likelihood it would spread to other countries?

The fact that one country has commercialized the technology means that, even if proprietary details of the technology do not leak out, other countries might be inspired to try to recreate it for themselves. That is a demonstration effect. And the process would be sped up if proprietary details leaked.

Second, what would be the consequences of the spread of this technology? How detectable would small laser enrichment plants be? How easy would they be to safeguard by the International Atomic Energy Agency?

I don’t know the answers to those questions. In fact, I don’t think one can know the answers to those questions without classified information. GE Hitachi’s application required a genuinely difficult cost-benefit analysis. But the remarkable thing is that no attempt was made anywhere within the U.S. government to actually do that cost-benefit analysis.

The Nuclear Regulatory Commission (NRC) interpreted its role as being merely to test GE Hitachi’s ability to handle classified information appropriately. The executive branch and Congress decided that their role was to leave everything up to the NRC.

So, in the end, the NRC licensed this facility without any kind of discussion about the potential proliferation costs or benefits. Now, as it happens, the plant will almost certainly not be built, for commercial reasons: GE Hitachi appears to think it won’t be commercially viable.

What this points to is a principle that could, in a highly imperfect yet promising way, enhance the nonproliferation regime: countries developing new technologies should have in place some kind of domestic nonproliferation risk assessment.

We have plenty of historical precedents for this in the area of nuclear technology. In the mid-1970s the United States imposed a domestic moratorium on funding for reprocessing (the extraction of plutonium from spent reactor fuel). Everybody always attributes that to the administration of Jimmy Carter, but the Ford administration was actually the first to implement the moratorium. The Carter administration made it permanent, a decision that was not purely about nonproliferation, although nonproliferation was a factor.

In 1977, the UK government launched a judicial inquiry into the construction of a large plutonium separation plant. That inquiry did consider nonproliferation issues. More recently, the George W. Bush administration, in an ambitious plan called the Global Nuclear Energy Partnership, which was intended to develop and commercialize various kinds of new technology, ordered a nonproliferation impact assessment that was published in draft form (but not in final form, because the administration didn’t like what it had to say).

In addition to historical precedents for a nonproliferation impact assessment, other areas of nuclear regulation also offer precedents. A basic principle of nuclear safety is that facilities and activities that give rise to radiation risks must yield an overall benefit. Within the European Union that has translated into a formal, legally binding requirement for member-states to ensure that all new classes and types of practice resulting in exposure to ionizing radiation are justified–in terms of their economic, social, or other benefits relative to the potential harm to health–in advance of being adopted.

If you replace the word health with proliferation, you get a nice summary of the kinds of issues a proliferation impact assessment would address.

Finally, precedents can be drawn from other technologies. Searching for new forms of enrichment technology is in some ways analogous to “gain of function” studies in the biological realm. Biotechnology has an emerging process for determining whether these studies–in which scientists give microorganisms increased destructive capability, with the goal of learning how to cure disease–have net benefits.

The process might be imperfect, even flawed in many ways, but it is a process, and the fact that it is in place stands in stark contrast to nuclear technology. The goal is not to find an excuse not to do nuclear research. The goal is to find a coherent, systematic process for weighing the benefits and risks of such research.

Elisa D. Harris
Elisa D. Harris is a Nonresident Senior Research Scholar at the Center for International and Security Studies at Maryland.

The history of efforts to prevent dual-use materials, equipment, and knowledge from resulting in destructive consequences involves many different governance efforts at multiple levels–international, national, local, and even individual. The measures that have been adopted over the last half-century have taken many forms. In some cases, as with treaties and national law, they have been legally binding, but they have also taken the form of guidelines and standards, even of codes of conduct for scientists.

In the brief time that I have, I am not going to try to talk about all of the governance measures. Instead, I want to leave you with five takeaways and then close with some policy recommendations for addressing what I consider to be the biggest weakness in the governance regime for biological technology.

My first takeaway is that biological technology can cause harm either as a result of deliberate malfeasance or because of inadvertence. Pathogens that are used to develop vaccines can escape from the laboratory and cause disease. Equipment used to study the underlying biological properties of pathogens can be used to make pathogens more transmissible and more lethal. Knowledge gained from research about extinct pathogens, such as the 1918 pandemic virus, can be used not only to strengthen disease surveillance efforts but to resurrect deadly disease agents. So the challenge in the biological area is both to prevent dual-use technology from being used intentionally for hostile purposes and to prevent unintended harm.

My second takeaway follows logically from the first: governance efforts in the biological technology area are much broader than in the nuclear or information technology area. As in the case of nuclear technology, these governance measures have focused first on nonproliferation, on preventing other countries from acquiring capabilities that could be used to cause harm; for example, by making biological weapons.

A clear example of such a governance measure is the Biological Weapons Convention, which bans the development, production, and possession of biological weapons. Also like the nuclear area, the biological area has strategic trade controls, including export controls at the national level and international controls through the Australia Group–all designed to try to deny countries access to materials and technology that could be used to develop biological weapons.

In addition to these nonproliferation measures, a variety of biosafety measures have also emerged. These have been designed to ensure that individual scientists do not put human, animal, or plant health at risk in their work with dangerous pathogens. Examples include guidelines developed for biosafety by the World Health Organization and the guidelines for research involving recombinant DNA developed and put in place by the National Institutes of Health in the 1970s.

My third takeaway is that, to a much greater extent than in the nuclear or information technology area, September 11 and the anthrax letters that followed were watershed events in efforts to govern biological technology.

As many of you will recall, five people died and seventeen others were infected as a result of the anthrax letters sent to members of Congress and the media. The confluence of the attacks in New York and Washington and the dispersal of high-grade anthrax material through the mail led many, both inside and outside the government, to conclude that the question was not whether bioterrorists would strike again but when.

The U.S. government responded to this new threat by following two parallel tracks. The first was to try to make it harder for terrorists and others who would do harm with biological agents to get access to them.

This was done through a variety of means, including tightening the controls on access to dangerous pathogens such as anthrax and plague and other so-called “select agents.” The Patriot Act barred certain restricted persons, including individuals from countries on the government’s terrorist list, from having access to select agents. Other legislation required individuals and facilities working with these select agents to register with the federal government and to undergo background checks.

These and other governance efforts were intended to prevent individuals who would do harm with dangerous pathogens from gaining access to them. In my judgment, however, this first track was undercut by the second track that the United States pursued: an unprecedented increase in funding for medical countermeasures to protect people from biological attack. At the National Institutes of Health (NIH), the number of grants for work on potential biological warfare agents increased from 33 in the period from 1996 to 2000, to almost 500 from 2001 to January 2005. The amount of funding at NIH for civilian biodefense research increased from $53 million in fiscal year 2001 to more than $6.7 billion (budgeted) for fiscal year 2016.

The number of specialized laboratories where scientists can work with these dangerous pathogens tripled from about 400 in the early 2000s to an estimated 1,500 high-containment labs today. And in 2014, the last year for which data are available, 316 facilities and some 11,000 people had been approved by the government to work with select agents.

My fourth takeaway is that this proliferation of scientists and facilities involved in research on dangerous pathogens has taken place against a backdrop of extraordinary advances in science and technology. Today it is increasingly easy to modify pathogens to make them more lethal, harder to detect, and harder to protect against.

A harbinger of this came in early 2001 with the publication of the Australian mousepox experiment, in which scientists trying to develop a contraceptive to control the mouse population ended up creating a highly lethal virus.

The National Academy of Sciences recognized the significance of this research and constituted a special committee to look at the potential risks posed by life sciences research. The committee, known as the Fink Committee for its chairman, MIT professor Gerald Fink, issued a report in 2003, aptly titled “Biotechnology Research in an Age of Terrorism.” In the report, the committee warned that dual-use biotechnology research could cause harm, “potentially on a catastrophic scale.”

To help address this problem, the Fink Committee recommended that seven categories of “experiments of concern” should be subject to oversight locally, at the institutions where the work was being carried out and, if necessary, on a national basis. Four years later, a federal biosecurity advisory board that was created in response to the Fink Committee report issued its own recommendations for research oversight of what it called “dual-use research of concern.”

My fifth takeaway is that the U.S. government’s response to the recommendations in these reports to address the risks posed by the most consequential types of dual-use research has been wholly inadequate. After the 2007 biosecurity advisory board report was issued, the government took more than five years to release even an initial U.S. policy on oversight of dual-use life sciences research. The announcement of this policy was prompted by controversy within the scientific community about research that was being carried out involving avian influenza viruses, work that was making those viruses more transmissible via respiratory droplets between mammals.

The 2012 policy was very narrow, applying only to research that was being funded or conducted by the U.S. government that involved one of fifteen specific select agents. The policy did not apply to classified biodefense or other research, or to relevant research not being funded by the U.S. government.

Two more years passed before the U.S. government released guidance for how the institutions covered by the dual-use oversight policy were to carry out the required oversight. The impetus for this additional policy guidance was another controversy–over research to create viruses similar to the 1918 pandemic influenza virus and to enable the pandemic strain to evade the human immune system.

Today the life sciences research community is more divided over the ethics and risks of certain types of dual-use research than at any time since the emergence of recombinant DNA technology in the early 1970s.

The U.S. government has taken note of the controversy within the scientific community and the concerns about the safety of some of this work. In response, it has enacted a funding pause and begun what it is calling a deliberative process for the most controversial dual-use research studies, so-called gain of function research: studies that add new functions to already dangerous pathogens.

The funding pause and the deliberative process create an opportunity both to develop an effective policy for gain of function research and to remedy the weaknesses in the U.S. government’s approach to the most consequential types of dual-use research more broadly. The U.S. government should use the authority it has under existing law to make oversight of this narrow but important class of experiments mandatory, something that is not the case today.

To eliminate the loophole for classified research and work that is being done at private facilities, the oversight requirement should apply to all relevant research, not just research funded or conducted by the U.S. government. The United States also needs to undertake a serious effort to develop common approaches and practices internationally. If we address this issue here in the United States but other countries pursue similar research without effective oversight arrangements, we will not be much safer than we are today.

Herbert Lin
Herbert Lin is Senior Research Scholar for Cyber Policy and Security at the Center for International Security and Cooperation and Research Fellow at the Hoover Institution, Stanford University.

Information technology is a very broad term. For example, it includes pencils and telephones. They can be used for bad things, but nobody is talking about dual-use controls or governance of pencils and telephones.

So I am going to talk about cyber weapons, which I define as information technology artifacts that are used to affect other information technology systems in some negative way, such as destroying or stealing the information inside.

A cyber weapon has two parts: a penetration part that allows you to get into the computer system of interest; and a payload part that tells you what you are going to do once you are inside. The two parts are very separate. The importance of this separation is clear when you think, for example, about a computer system controlling a centrifuge or a generator. Computer science skills, hacking skills, are needed to get into the computer. But in order to tinker with the centrifuge or the generator one needs knowledge of centrifuges and power plants. I am pleased to report that most hackers do not know much about centrifuges or generators.

Also important to note is that cyber weapons may or may not be designed to be self-propagating. A cyber weapon can be designed to go after one target and one target alone. Such a weapon may appear on another system but not do any damage there. Alternatively, a cyber weapon could be designed to do damage everywhere it winds up.

Advances in information technology are driven by the private sector (in which IT is ubiquitous), not by the government. Thus, the technology base for the penetration part of a cyber weapon is ubiquitous. You can find free hacking tutorials on the Internet. You can order your laptops from Amazon or Dell.

The most interesting part about cyber weapons is that there is no consensus that the use of cyber weapons is bad. No nation wants cyber weapons used on it, but every nation wants to be able to use cyber weapons on somebody else.

How you square goodness and badness in that space is obviously not a question of technology. When we do it to them, it is a good use. When they do it to us, it is a bad use.

Another point about cyber weapons is that what you do with them is essentially infinitely scalable. You can use them to do nothing; that is, you could go inside a system, look around a little bit, and then leave without affecting the system at all. Or you can use them to go in and create havoc in the system. And you can do many things in between. The level of effect is scalable to anything you want.

Cyber weapons are already ubiquitous. Most of us have not been the victim of a nuclear or biological weapon. But I would bet that most of us have been the victim of a cyber weapon at some point. Everybody has experienced spam. But why is spam a cyber weapon? Because it wastes your time. You have to delete it, and it makes your system less available than it would otherwise be.

The fact that you can scale the effects to essentially anything you want means these weapons are very usable. And that makes them highly desirable for policy purposes.

When trying to govern cyber weapons, we can take three approaches. The first is to think about the acquisition of cyber weapons, about acquiring capability. The second is to somehow regulate their use. The third is to institute confidence-building measures or norms of behavior that guide how people or governments should behave in cyberspace.

Getting a handle on acquisition is essentially impossible. Misguided teenagers are out there right now creating cyber weapons. Fifty years ago, I was one of them. (Back then, it wasn’t illegal to use cyber weapons.)

What about governing the use of cyber weapons? There are already agreements constraining the use of cyber weapons, but only to the extent that nations agree that cyber weapons are governed by the laws of war. If in an armed conflict you use cyber weapons in ways that cause certain types of damage prohibited by the laws of war, then those uses would be constrained. But the nations of the world have not all agreed that the laws of war apply to cyberspace.

Nations do sometimes agree to norms of behavior. For example, two nations might agree to cooperate with each other in suppressing the criminal use of cyber weapons. That would mean both nations have to agree that a certain use of a cyber weapon is a bad thing and to criminalize that use. But such norms are not binding, except to the extent that each nation individually says, “I will pass a law that will make this particular use illegal.”

When thinking about the governance of cyber weapons, you need to consider four things. First is that the technology base is ubiquitous. It really is everywhere.

Second, we have seen that cyber weapons are just too useful to give up as an instrument of national power and influence. They are not just for destroying things. They are used for spying too. And the spying part is really important. Every country’s intelligence agencies make use of cyber as just one more way they can spy on everybody else.

Third, because cyber weapons are infinitely scalable, they have no clear threshold. In contrast, once a nuclear weapon goes off, no matter how small, everybody will notice.

Finally, many paths lead to expertise. The expertise needed to create a cyber weapon is not confined to PhDs, or to master’s degree students, or to anybody with any degree at all.

The road to getting a good handle on the governance of cyber weapons will be long. Probably the most important step we as a nation could take would be to decide whether we are better off in a world in which everybody is penetrable or everybody is defendable. Until we get a handle on that, we are going to be talking out of both sides of our mouths.

© 2016 by Robert Rosner, James M. Acton, Elisa D. Harris, and Herbert Lin, respectively

To view or listen to the presentations, visit /creationanddestruction.
