7. DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields
Previous Chapter: GENETIC TAKEOVER | Cryptocommerce
Historically, violence has been declining while we have made progress on a variety of factors that humans may value. Cryptocommerce can help us cooperate better in advancing progress. Biotechnology is unlocking healthier lives for more people, while remote sensing and robotics enable production at lower cost, using less energy, and creating less waste. Soon, we may be able to produce entirely new materials with molecular precision. We need technological progress to continue enabling more and more valuing entities to achieve what they value. So far, so good. Now let’s look at the dark side. Technology is raising the stakes of our civilizational game, and could threaten its very existence. There are at least two traps to steer away from.
Avoid Trap 1: ‘Small Kills All’ Via Technological Proliferation
Technologies are becoming cheaper and more widely available. CRISPR went from being the biotechnology achievement of the decade to being used routinely by lab students. Democratization of powerful technologies also means a proliferation of dangers.1 Nuclear weapon development is a large undertaking employing special materials to make distinctive products. In contrast, chemically manufacturing strands of DNA could soon become affordable to the point of printing targeted deadly proteins or cells at home.2 Further out, atomically precise manufacturing could allow ordinary humans to employ ordinary materials to create entirely unexplored military applications.
The risks from increasingly widespread weaponizable technology are sometimes summarized under the dynamic of “small kills all”, i.e. ever fewer people can cause ever greater destruction.3 It may only be a question of time until one of the 7 billion active players seeks large-scale civilizational destruction and can acquire the technology to cause it. To understand the danger, let’s zoom in on a few risky technological dynamics, from nukes to robotic violence to biotechnology and nanotechnology.
Dangerous Dynamics
Nuclear Weapons
Nukes are a great example of how we have historically fared with massive risks of large-scale violence. The protections against an accidental nuclear launch were terrifyingly weak, while the degree of lying to the public about the safety controls in place was terrifyingly high.4 Public opinion was also frightening. In a 1945 Fortune poll on US attitudes about the atomic bomb, more than half of respondents, knowing about the devastating consequences for Japan, agreed “we should have used the two bombs on cities just as we did”, and almost a quarter agreed “we should have quickly used many more of them before Japan had a chance to surrender”.5
FLI’s Unsung Hero Award acknowledges that it may only be due to a few individuals that we lived to tell the tale.6 In 1983, Stanislav Petrov ignored the Soviet early-warning system that had falsely detected five incoming American nuclear missiles, plausibly preventing an all-out US-Soviet nuclear war. Vasili Arkhipov vetoed his submarine captain’s decision to launch nuclear weapons at US ships on the false assumption that they were under attack. This was at the height of the Cuban Missile Crisis, so Arkhipov may well have single-handedly averted nuclear war. The degree of governmental secrecy, the willingness to use nuclear strikes, and the accidental close calls are worrying with respect to future warfare. Nukes are still dangers hiding in plain sight.
Fifty years after the 1970 Nuclear Non-Proliferation Treaty - in which non-nuke-holding nations agreed not to proliferate while nuke-holding nations agreed to pursue disarmament - five authorized nuclear weapons states still hold more than 13,000 warheads in their combined stockpile.7 The Intermediate-Range Nuclear Forces Treaty between the US and Russia has ended, and North Korea is making progress toward a nuclear-tipped intercontinental ballistic missile that could reach LA in 30 minutes. The fragile security infrastructure around nukes is withering away, making accidents more likely with every year that goes by.8 We have not been safe from catastrophic weapons for a long time. It is thanks to outstanding individuals and blind luck that we have survived thus far. For civilization to survive, we shouldn’t rely on those factors.9
Listen to Daniel Ellsberg, the author of The Doomsday Machine, discuss nuclear risks.
Robotic Violence
Automation of cooperative arrangements will lower their cost and increase cooperation. Automation will also be the main multiplier for our ability to cause violence by lowering its costs. Robotic weapons, drones controlled by far-away individuals, and self-driving cars turned into land-based missiles are all weaponizable physical technologies. A single trigger can cause an automated reaction resulting in many deaths. Tiny autonomous drone swarms may be programmable to assassinate victims based on their biological features.10
In response to activism, Google and some other large corporations agreed not to cooperate with governments in building autonomous military drones. But many of these companies are still working towards full self-driving automobile software as fast as they can. While self-driving cars are not built to kill people, they are built on insecure operating systems. If adversaries can take over millions of cars and run them into crowds at high speed, these cars become lethal land-based drones. In addition to technology whose primary purpose is violence, we are surrounded by unsecurable technology that could be used as weaponry.
Importantly, robotic violence is not constrained by AI. We smile at naive media depictions of the Terminator robot. But, in terms of destructiveness, those robots aren’t a long way from current Boston Dynamics robots. Make those robots a factor of ten cheaper and more battery-efficient, and mount a machine gun on them. The same robots that do adorable dances in today’s promotional videos become a means for automated destruction.11
Biotechnology Risks
Biotechnology is responsible for some of human health’s biggest successes, such as antibiotics and vaccines. The response to the COVID-19 pandemic, with a vaccine designed in a record time of a few days, highlights the imperative to advance biotechnological progress. At the same time, the pandemic gave a taste of the potential magnitude of anthropogenic bio risks as existential risks to humanity. Within a few centuries, and recently accelerated by breakthroughs like CRISPR, we went from figuring out what pathogens are made of to creating new, genetically modified viruses with different properties.
Toby Ord is particularly worried about gain-of-function research, which makes pathogenic strains of infectious agents with higher transmissibility, lethality, or vaccination resistance. Such research is aimed at improving pandemic prevention and defense. For instance, the avian influenza A/H5N1 virus can be very dangerous to us, but its natural version is not airborne transmissible between humans. Nevertheless, in 2011, two laboratories created an avian influenza variant that became transmissible through the air between ferrets. Shortly after, building on this research, point mutations of the H5N1 virus genome were screened to identify mutations that would allow airborne spread.
But our poor biosafety track record is especially concerning with respect to agents designed for optimal lethality. Ord draws attention to the Dugway Proving Grounds case. Dugway was established by the US military to work on chemical and biological weapons. In 2015, it accidentally distributed samples containing live anthrax spores, instead of the expected inactivated spores, to US labs across eight states. In response to such accidents, the US placed a moratorium on this kind of research, which has since been rescinded.12 Terrifying biosecurity accidents regularly happen all over the world, and the COVID-19 pandemic showed how quickly pathogens can spread globally.
In addition to critical accidents, we should also worry about deliberate threats. Pathogens could be engineered to be vastly more lethal and transmissible than SARS-CoV-2 and similar viruses. More than 1,200 different kinds of potentially lethal bio-agents, including bacteria, viruses, and parasites, have been studied for use as biological weapons.13 As with the history of biological accidents, governments currently have the greatest capacity for deliberate abuses. There are international treaties guarding against bioweapons use, such as the 1972 Biological Weapons Convention, which prohibits their development, production, acquisition, transfer, and stockpiling. Nevertheless, we lack the means to monitor compliance on either a national or international level. Some believe North Korea has assembled a biological arsenal containing anthrax, bubonic plague, smallpox, and yellow fever.14 Likewise, as shown in the Dugway case, the US is clearly not twiddling its thumbs when it comes to the exploration of biological weapons.
Nanotechnology Risks
At our current rate of scientific breakthroughs, we stand ready to unlock radically novel technologies. If Richard Feynman is correct that “the principles of physics [...] do not speak against the possibility of maneuvering things atom by atom”, one breakthrough will be molecular nanotechnology.15 This future level of nanotechnology involves having atomically-precise control of the structure of matter. Starting from current chemical synthesis, we may develop progressively better nanosystems capable of macromolecular self-assembly. The long-term goal is atomically precise manufacturing, i.e., the ability to use coordinated nanosystems operating with atomic precision to produce macroscale products with unprecedented performance.16
Assembling materials from the bottom up would allow us to make any physically possible structure at minimal cost or waste, similar to an incredibly precise 3D printer. Nanoscale medical applications could help repair and rejuvenate our bodies. Nanoscale processing of materials could boost material welfare. Artificial molecular machines could target threats to nature's molecular machinery, reversing damage to our planet’s ecosystem. Combined with information technology, new materials could host entire networks of sensors, actuators, and communication devices. Summed up, “asking what atomically precise manufacturing systems can do with materials is much like asking what computers can do with information.”17
While this level of nanotechnology is likely still many years off, it will come with substantial risks. In addition to boosting existing weapons’ performance and enabling miniature mobile systems, it may lead to new types of weaponry. With desktop-scale nanofabs, separating uranium isotopes could become as easy as today’s 3D printing. Swarms of billions of insect-sized drones could communicate, monitor their surroundings, and act in a coordinated fashion. Our inexperience with nanotechnology threats is especially worrying given our poor historical track record of handling much simpler technologies such as nukes.
Avoid Trap 2: ‘Civilization Suicide’ Via Single Point of Failure
We should be terrified of a scenario in which technological proliferation allows individual malicious actors to cause global destruction. To deal with such risks, it can seem tempting to explore solutions that expand the powers of governments - or even establish a world government - in the search for safety.
The initial appeal of an actor with global reach comes from the sobering observation that, unless we can effectively monitor and prevent risks globally, there will always be pockets of actors that can go rogue, killing everyone. To mitigate this danger, a world government would have to orchestrate the monitoring of every individual in contact with potentially dangerous technology. When a threat is detected, it would have to orchestrate its rapid elimination. This would mean equipping such a government with an unprecedented level of surveillance and physical military weaponry.
Unfortunately, powerful large-scale actors in charge of world-destroying technologies may not mitigate the risks from rogue individuals. After all, they are also made up of individuals. The risk of malicious individuals causing world destruction is still present. In fact, if the organizational structure amplifies the reach of the large number of individuals making up the actor, it may also amplify their malice. By the logic of adverse selection, whenever we create an opportunity to hold power, we create competition for it. The greater the power, the greater the race to capture it, and the less likely that competitors will all have purely good intentions.
Even in a best case where the centralized organization could filter out any bad actors it attracts, it is in danger. If the “good guys” get to the top, they become a target for the “bad guys” who seek to take over. As long as individual actors in a world government are vulnerable to external extortion, a benign world government would be vulnerable to it.
The development of nuclear weapons technology was hastened because large governments that felt threatened thought having them would improve their security. Currently, we have a sufficient balance of power among multiple nuclear powers, and so far all of them have succeeded at making decisions that did not result in their use against an adversary. If we had a world-dominating central body in possession of nukes that feels its existence is threatened, would this be a world more or less likely to deploy nukes than our current one? Hanson warns that, by creating a single point of failure, such a power’s suicide could quickly become a civilizational suicide.18
Even if we could imagine a perfect central actor without those critical dangers, how could we possibly hope to create it? We are not in such a world and, for reasons just discussed, not everyone thinks it would be a good idea. Actors with good reasons to try to prevent a global takeover by a world government may decide to attack first to avoid otherwise becoming the victim of such a strike. Simply the fact that we have a multipolar deployment of nukes internationally already narrows the set of possible survivable futures. It could be fatal to try to transition to a unipolar world, or even to let things proceed to the point where a sudden transition to a unipolar world is plausible.
The temptation to create powerful central coordination, for instance to solve small kills all risks from the proliferation of technologies, is Robin Hanson’s best guess for the Great Filter that humanity may face. The Great Filter is one explanation as to why we have not found alien life even though, by the sheer number of potential life-carrying substrates and the time available to contact us, we should have. There may be a Great Filter all life has to pass to evolve into an interstellar species. No other life form has passed it yet, which is why we don’t see anyone. Either this filter is behind us, such as abiogenesis (life arising from non-life), in which case our chances of survival are favorable, or it lies ahead on our civilizational path, such as a risk that could wipe out humanity.19 It would be tragic if, by attempting to protect civilization, we ended up preemptively destroying it.
Dangerous Dynamics
Pervasive Surveillance
In the past, certain actors have tried to take over the world. Some made a certain amount of progress but ultimately failed. As technology advances, the next attempts may not. Technologies for direct monitoring and control of a population are becoming available, becoming cheaper, and coming to the attention of authoritarian states everywhere. Even in relatively free countries, we must assume any private information held by companies or governments is accessible by sophisticated hackers from both our own and other unfriendly nation states.
Google’s ‘Don't Be Evil’ dynamics led to an internal collective sense that employees would not perform evil actions. Yet those good employees let Google gather a very dangerous amount of aggregate information on people in one vulnerable place. There were multiple attempts to move Gmail from plain text to encrypted ciphertext, but they never succeeded. Even if Google does not abuse its power, the existence of such a trove is a tremendous temptation. External abuse has occurred both when Google was hacked by a foreign nation-state and when it was served with U.S. national security letters demanding its customers’ information. Such national security letters are unconstitutional and a severe violation of democratic accountability.20 But they still happen, because having many people's plain text email at one company is too juicy a target. We must prevent situations in which those who are not evil can still be used for evil.21
Widespread surveillance may become more common. Individuals voluntarily install surveillance in their homes, wear sensors on their body, and carry a tracked pocket-sized computer that can perform its own external surveillance. The smaller and cheaper sensors get, the harder it will be to know if someone is equipped with monitoring sensors. You may even be unaware that someone has put such sensors in your clothes or your body. Information that becomes public will remain public, waiting for steadily-improving Machine Learning-supported correlation to make sense of it. The importance of knowledge will lead to races to obtain knowledge, improve sensors, and perform more powerful surveillance.
Robotic Enforcement
A similar trend drives the automation of physical enforcement. Using humans as violent peace enforcers is too expensive, especially relative to the cost of robotic enforcement. The U.S., today’s dominant military power, already uses drones for military purposes. A human being is in the control loop, but that person has no physical ‘skin in the game’ in any battles. And nothing ensures humans will stay in those control loops, especially when the stakes are raised. A 2020 Libyan drone airstrike was already conducted with no human in the loop.22
In a shooting war, despite our best efforts, formerly unethical actions may be reconsidered as people do what they think is necessary. At that point, how corruptible the rules guiding robotic enforcement are becomes a very present danger. Corruption can be vastly more amplified by automated enforcement mechanisms than by human ones. The Nuremberg Trials and reactions to the Milgram Experiment helped create a norm that if we receive an order that is illegitimate by society’s current moral code, we are individually responsible for not obeying it. The friction in human enforcement, critical to our safety, is not present in automated enforcement. It will take longer to build a system with built-in friction against corrupt orders. The path of least resistance is to deploy systems before we know how to include such friction.
The transition to pervasive automated enforcement through robotics is coming. Our failure to put adequate controls in place with respect to nukes is frightening when we project it onto this much more incremental, less visible threat. A robotic takeover could happen by an existing power gradually expanding its surveillance or military force. This is a boiling frog scenario. Democracy and the constraints against totalitarianism in the U.S. are much weaker than we thought. If the vulnerabilities of democracy in the still-dominant superpower are a cause for concern, so is the absence of accountability in its rising rival, China. If rapid deployment of a technological advantage in surveillance or enforcement technologies results in a global regime, such a single actor would constitute an existential risk in itself.
Decentralize Defense: Multipolar Active Shields
We need a civilization that decreases our vulnerability to the small kills all trap without going to the opposite extreme and creating the power suicide trap. For a moment, let’s dream big and design a possible system that, while not easy to build, would, if successful, actually address the risks from biotechnology, nanotechnology, and robotics without creating single points of failure.
First, we need to prevent small kills all attacks from succeeding, even when we can’t stop them from being attempted. To successfully defend against an attack, we must have a deployed fabric of systems that detect and react to attacks based on trustworthy mechanisms. This is called an active shield.23 In theory, widely deployed sensors could collect encrypted data about the usage of relevant technologies. An automated protection system could analyze the data and enforce the action required to prevent an abuse of the technology. If we cannot prevent robotic enforcement, we must ensure it is verifiably used for legitimate defense only. Similar to the white blood cells of our immune system, such defense machines would fight a variety of dangerous replicators, decreasing in scale as our threats do.
Second, we need to prevent active shield deployments from creating the conditions for power suicide. The rules that govern the shield need to not only stop those it monitors from engaging in violence but also keep the monitoring components themselves in check. For that, the system must be multipolar: the different components monitor each other, and if one component goes rogue, the rest of the system must be able to gather enough force to counter it. Instead of a mutually trusting system of active shields, we need a mutually suspicious system of active shields. This is a resilient way of building an overall system that creates stable enforcement of the rule of law without assuming that any one component of the system is incorruptible.
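To make this mutual suspicion concrete, here is a minimal toy simulation in Python. Everything in it is an illustrative assumption rather than a real protocol: the node names, the two-thirds threshold, and the boolean “looks rogue” judgment stand in for what would actually be verified sensor evidence and physical counterforce. The point is only the structure: no single watcher can revoke a peer; action requires a supermajority of independent flags.

```python
# Toy mutual-suspicion fabric: a node is neutralized only when a
# supermajority of its peers independently flag it, so no single
# watcher can unilaterally revoke another. Illustrative only.

from dataclasses import dataclass, field

SUPERMAJORITY = 2 / 3  # fraction of peers required to act (an assumption)

@dataclass
class ShieldNode:
    name: str
    flags: set = field(default_factory=set)  # peers this node deems rogue

    def observe(self, peer: "ShieldNode", looks_rogue: bool) -> None:
        # In a real fabric this judgment would come from verified
        # sensor evidence; here it is just a boolean input.
        if looks_rogue:
            self.flags.add(peer.name)

def quorum_revokes(nodes: list[ShieldNode], suspect: ShieldNode) -> bool:
    """Act only if a supermajority of the suspect's peers flagged it."""
    peers = [n for n in nodes if n is not suspect]
    votes = sum(1 for n in peers if suspect.name in n.flags)
    return votes >= SUPERMAJORITY * len(peers)

nodes = [ShieldNode(f"shield-{i}") for i in range(4)]
rogue = nodes[0]
for watcher in nodes[1:]:
    watcher.observe(rogue, looks_rogue=True)  # all three peers flag it
print(quorum_revokes(nodes, rogue))  # True: aggregate force counters the rogue
```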
We should be aghast at the degree of surveillance and enforcement considered here and must do what we can to fight it. But if we can’t stop it, we must face the terrifying choice of what type of system it is and how to control it. Let’s look at three features a multipolar active shield should have:
Monitor: Encrypt Sousveillance
Successfully monitoring for hostile technology activity requires an unprecedented level of surveillance. On the one hand, we are lucky that robots, nukes, biotech, and nanotech (in contrast to cyber threats and AI which we tackle in the next chapter) involve physical aspects, so there is something physically observable. On the other hand, the physical processes we need to monitor to distinguish dangerous biotech or nanotech uses from benign ones are extraordinarily small. This suggests the need for almost unimaginable surveillance levels.
Compare this to our most recent precedent, nukes. While their current situation is still very concerning, no one has used them in battle since World War II. This is partly due to luck and partly due to non-proliferation treaties backed by monitoring regimes. We are fortunate that nuclear weapons are a very monitorable gross physical phenomenon. The average person has no need to privately traffic in Uranium-238 or engage in other activities with strong nuclear signatures. Likewise with nuclear tests, people don't have important private reasons to engage in activities that look like a nuclear test. Monitoring for nuclear explosions is not very intrusive to people's private lives.
The physical objects monitored to detect hostile nuclear weapons activity are large-scale and easy to verify, particularly when compared to what is needed to monitor for future offensive biotechnology and nanotechnology use. Those involve molecular machines, so they require verification at the molecular level: a daunting challenge. To prevent these hostile attacks, we need to monitor activities in small-scale labs that do small-scale manipulation of widely deployed synthesis mechanisms that are otherwise general-purpose. Given that the amount of weaponry one can make in a small building will be substantial, we may not be able to preserve our homes as privacy fortresses.
Deploying such pervasive physical monitoring without strong privacy safeguards could create dangers in excess of what we want to avoid. It could lead to a Big Brother-style top-down pervasive surveillance state that makes 1984 pale in comparison. There are other options. David Brin discusses a theoretical alternative by which top-down surveillance is kept in check by bottom-up sousveillance.24 He suggests that given future tech-enabled affordable mosquito-size cameras, the only way to prevent pervasive surveillance is via pervasive sousveillance, or upward-looking monitoring.
In this scenario, all information is simply public, so that all of us have complete access to all physical activity in the public realm. The information produced by pervasive sousveillance lets different entities keep each other in check, thereby stopping corruption. Sousveillance could hold abuses in check but would be destructive of privacy. Even if we could adapt our privacy norms to match such a transparent society, it’s not clear that we should. Even if everyone can, in theory, see everyone else equally, the costs of analyzing information are asymmetric. Apart from the extortability of individuals, the bigger danger is the long-term power advantage this information confers, which can destabilize the balance.
A preferred monitoring option is an automatic network of artificial monitoring agents, from which information is not revealed to humans unless pre-agreed criteria are triggered. Ben Garfinkel uses the example of bomb-sniffing dogs, which report only if a given bag holds explosives. He suggests there is no reason why monitoring security-relevant information requires the system to learn anything else. While tricky to implement, an encrypted sousveillance fabric—that only releases information when it detects anomalies—could avoid top-down surveillance abuses and the loss of privacy from transparent sousveillance. Such a system, while hard to construct, could be the beginning of a path to safety.
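As a sketch of that bomb-sniffing-dog interface, the toy Python below keeps readings sealed and exposes a single reveal-on-tripwire outcome. The scenario, field names, and threshold are invented for illustration, and since this toy decrypts inside the check it only models the interface; a real fabric would need homomorphic encryption, secure hardware, or multiparty computation so that no single party ever holds the plaintext.

```python
# Toy "reveal only on tripwire" monitor: readings are stored encrypted
# and a sealed check returns either nothing or, if the pre-agreed
# criterion fires, the record for escalation. Field names and the
# threshold are invented; this models the interface, not the
# cryptographic guarantees a real fabric needs.

import json
from typing import Optional
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # held by the monitoring fabric
fabric = Fernet(key)

def seal(reading: dict) -> bytes:
    return fabric.encrypt(json.dumps(reading).encode())

def tripwire_check(sealed: bytes) -> Optional[dict]:
    """Reveal the record only if the tripwire criterion triggers."""
    reading = json.loads(fabric.decrypt(sealed))
    if reading["anthrax_spore_ppm"] > 0.5:  # illustrative threshold
        return reading                      # escalate to human review
    return None                             # benign: nothing is revealed

print(tripwire_check(seal({"lab": "A", "anthrax_spore_ppm": 0.0})))  # None
print(tripwire_check(seal({"lab": "B", "anthrax_spore_ppm": 3.2})))  # record
```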
From ThinThread to Encrypted Sousveillance
An early experiment with encrypted surveillance originates with William Binney, then a senior official at the NSA. The NSA collects and stores unencrypted data, but in theory limits the access of individual analysts to this information. An analyst, for example, may be allowed to make only a limited set of queries to the database and only view the portion of records that are classified as matching these queries. From the Snowden revelations we know how little self-restraint was actually practiced by the agencies. Congress passed laws that the NSA was supposed to obey, but the internal judicial procedure became a rubber stamp. Almost all of the requests for information that the NSA took to the FISA Court were approved, amounting to a window-dressing of checks and balances.
During the crucial period of the emergence of this surveillance capacity, William Binney tried to construct an internal automated system with some degree of built-in governance. ThinThread was a prototype of an encrypted monitoring system that would only allow data on individuals to be decrypted if a judge found probable cause to believe the target was connected with serious crime. Such internal controls could have enabled a mutually monitoring human system, aided by FISA Courts and oversight by Congress. It would still have been corruptible without democratic accountability, but it would have been a first step toward a system whose automated governance regime prevents uncontrolled access to surveillance information. Unfortunately, the program was canceled and eventually replaced by a system without the filtering and encryption protections.
Given recent progress in AI and cryptography, it is time to make another attempt at making monitoring privacy-preserving. In theory, if a surveillance task can be automated, then it can be done in a way that avoids requiring any party to collect the data in unencrypted form. There are existing facial recognition systems that use homomorphic encryption to report only whether an image contains the face of a suspect. From there, it is not far to imagine future systems that report the identities of individuals only if they detect illegal activity with high probability. This comes closer to establishing a tripwire system in which encrypted information only gets revealed if it triggers certain pre-agreed identifiers. Using zero-knowledge proofs, it may be possible for the monitoring fabric to prove that its information gathering satisfies some agreed-upon tripwire criteria without revealing anything that it is not supposed to reveal.
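To give a flavor of computing on data that stays encrypted, here is a small sketch using the additively homomorphic Paillier scheme, via the python-paillier library as one possible tool choice. The scenario, per-site counts of flagged synthesis runs with a tripwire on the aggregate, is an illustrative assumption, not a proposal from the text.

```python
# Sketch of additively homomorphic aggregation with the Paillier scheme
# (via the python-paillier library: pip install phe). The aggregator sums
# encrypted per-site counts without seeing any individual value; only the
# key holder (ideally a multiparty-held key) decrypts the total.

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Each site locally encrypts its own count of flagged synthesis runs.
site_counts = [0, 2, 1]  # hypothetical values
ciphertexts = [public_key.encrypt(c) for c in site_counts]

# Paillier lets us add ciphertexts; the sum decrypts to the plaintext sum.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c

total = private_key.decrypt(encrypted_total)
print(total, total >= 3)  # tripwire evaluated on the aggregate only
```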
The hard part will be designing a multipolar monitoring system in which different agents reliably keep each other in check. Illegitimate release of information is not a visible abuse that the other monitoring actors can detect. In the case of physical coercion, other watchers could see the involuntary interaction performed by bad watchers and treat them as attackers. The case of illegal release of information is harder to monitor because it has to be done by internal inspection.
Any privacy-preserving system is non-transparent by virtue of the fact that it is observing information that it is not revealing. If it is non-transparent, it is hard to reliably prove that it is not revealing the information in a way that is abusable. Such double non-transparency is a high bar, but it is needed if we want to trust that our information is neither publicly nor privately revealed.
Fortunately, zero-knowledge proofs, secure multiparty computation, homomorphic encryption, differential privacy, and other encryption and privacy efforts are progressing rapidly. More funding is needed to speed up the development of better tools. The temptation to corrupt monitoring fabrics is going to be enormous. We must not let fear of the dangers, combined with the promise of privacy preservation, lower our guard to the point that we allow abusable monitoring to be deployed. We will come back to this when discussing computer security in chapter 8.
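Of the tools just listed, differential privacy is the simplest to show in a few lines. The sketch below publishes a count with Laplace noise calibrated so that any single individual's presence changes the output distribution only slightly; the epsilon value and the query itself are arbitrary choices for the demo.

```python
# Toy Laplace mechanism for a counting query (sensitivity 1): noise with
# scale sensitivity/epsilon masks any single individual's contribution.
# The epsilon and the "flagged labs" query are arbitrary demo choices.

import numpy as np

def dp_count(records: list[bool], epsilon: float = 0.5) -> float:
    scale = 1.0 / epsilon  # sensitivity of a count is 1
    return sum(records) + np.random.laplace(0.0, scale)

flagged = [True, False, False, True, True]  # hypothetical per-lab flags
print(round(dp_count(flagged), 2))          # noisy count near 3
```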
Detect: Design Ahead
Let’s assume we successfully design an encrypted multipolar monitoring fabric. How do we define a dangerous activity to monitor for? Nick Bostrom compares the process of civilization engaged in technological discovery to pulling balls out of an urn. He suggests that we need a system that ensures we don’t pull out any “black balls”, i.e. technologies that cause disaster instead of progress. What if we pull out a ball that kind of looks gray to us, while others see it as more silver? As we start playing with it, it may get dirty and turn darker and darker until someone speaks up and calls it a black ball.
When someone discovers such a civilization-destroying potential of a technology, we need an open system in place that is effective at problem-solving around the threat. If we have the complex superintelligence of civilization, with a strong interest in preserving the peaceful decentralized fabric of cooperation, then we can bring this intelligence to the problem. We need to make our cooperative architectures more reliable, more accountable, and more widely reviewed, so more of us can know how much danger we are in, and give feedback.
As a precursor of such a system, let’s take inspiration from Rob Reid’s proposal for pandemic preparedness. He suggests that a global cooperative layer of scientists, equipped with local knowledge of the viral patterns in their area, could create open source weather maps of dangerous pathogens.25 Our world’s complex adaptive systems have millions of trade-offs on millions of constantly changing margins. We must resist the temptation of creating centralized organizations tasked with solving the problem, as this reduces the intelligence that gets applied to it. If complex systems lose their adaptiveness to high-level planning, civilization loses its ability to adapt.
The Oracle Problem
This is especially pressing as we increasingly rely on automated monitoring systems. They turn a question about the real world into a decision procedure over electronically judgeable evidence that supposedly represents a claim about the real world. When all that these systems can draw on is evidence brought to them by sensors in the real world, how can we trust the outcome? The Oracle Problem is especially pressing in situations in which not all sensors can be assumed to be well-meaning.
Fortunately, it only becomes a pressing problem once we have already solved all other problems. The Oracle Problem has become a concern in the blockchain world only because the ecosystem managed to advance to a point where this problem arises. This is a tremendous achievement, and experimentation with solutions in the blockchain space should inform our thinking moving ahead.26 Noting the problem means that we can get more minds working on it.
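As a minimal illustration of the mitigation pattern described in the notes, where many independent reporters are aggregated so that one faulty node cannot corrupt the answer, the sketch below resolves a query with a median. Node names and values are made up; real oracle designs layer stake, reputation, and dispute rounds on top of such robust aggregation.

```python
# Toy oracle resolution: aggregate independent sensor reports with a
# median so a minority of corrupted reporters cannot move the outcome.
# Node names and readings are invented for illustration.

from statistics import median

reports = {
    "node-a": 101.2,
    "node-b": 100.9,
    "node-c": 101.0,
    "node-d": 9999.0,  # a faulty or lying sensor
}

def resolve(reports: dict[str, float]) -> float:
    # The median tolerates up to just under half of reporters being wrong.
    return median(reports.values())

print(resolve(reports))  # 101.1: the outlier barely shifts the answer
```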
Fortunately, we have ample opportunity to learn along the way. Some technologies may allow for a window in which it is possible to design ahead and simulate them well before actual construction is feasible. For instance, advanced nanotechnology can and likely will be simulated well before it can be implemented. This time gap between knowing what is buildable and being able to build it creates room to increase safety. In addition, biotechnology is sometimes classified as the biological subset of the category of molecular machine systems referred to as nanotechnology. This means that in addressing near-term biotechnology dangers we will pick up strategies applicable to the longer-term challenges of nanotechnology. It’s not too early to start.
Defend: Open Arms
Let’s imagine we succeed at creating an automated multipolar monitoring system that can detect dangerous activity while preserving privacy. Next, we need to design the enforcement mechanism that is activated if illegal activity is detected. Automated contracts themselves cannot intervene directly in physical reality. This is problematic because an enforcement mechanism has to be based in physical reality, for instance via robotics. If we think of monitoring as the sensory side of smart contracts, we need to combine it with a motor side that can turn decisions into actions in the world.
Similar to the monitoring fabric, the logic of the enforcement fabric must be designed as an open source system. Knowledge transfer is in everyone’s interest, similar to when the U.S. unconditionally offered other nuclear parties its inventions for protecting against rogue launch. If any one player can prevent a false launch, it is better for everyone. Once we agree on the rules by which the system operates, and have simulated and tested them, we should strive for a simultaneous multipolar deployment by all parties. If any one portion of the active shield makes use of its enforcement mechanism illegitimately, the rest of the fabric can use its aggregate power to shut down the part of the network that operated illegitimately.
The closest current analog to the kind of open source innovation required, termed Open Arms, is perhaps the large multi-way consensus mechanisms of blockchains. A large chain, such as Ethereum, faces the problem of having to design a single set of incorruptible rules to be enforced in a multipolar manner. Even though all of the participants in the mechanism are corruptible, their mutual checking of each other through the blockchain replication mechanism should make the overall system incorruptible.27
Drones with Body Cams
What would such a system look like in practice? While automated enforcement is still far in the future, we can already make decisions about the predecessors to these technologies. One such decision is demanding the analog of body cams in emerging enforcement machines. Body cams make corrupt behavior harder to hide, and thus less likely to occur, and make proper behavior that has an ugly outcome easier to defend. They both hurt bad cops and help good cops.
Every automated system with the capacity to kill people, including existing robotics such as drones, should have a body cam built into it. Such a black box recorder would contain all footage of what happened leading up to a fatality. If drones are acting lawfully, there is little ground to resist a time-delayed revelation of the information in the black box, i.e. the footage becomes public after a proper time window. Twenty years later the footage will not be significant to intelligence operations. Closing the monitoring feedback loop to hold bad actors accountable may require less than twenty years, but it is a lot better than never. If we can get twenty years agreed upon now, in ten years we may be able to get five years. If we can get something accepted that embodies the principle, then even with a time delay that is painfully wrong, we can start negotiating.
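One hedged sketch of how such delayed, multipolar release could work: encrypt the footage, then split the decryption key among independent custodians with Shamir secret sharing, so that no single custodian can leak it early or withhold it forever, and any agreed threshold of them can reconstruct it once the window expires. Note that the delay itself is enforced by the custodians' agreement (or a separate time-lock scheme), not by the math; all parameters below are illustrative.

```python
# Shamir secret sharing over a prime field: split a footage key into n
# shares so that any k custodians can reconstruct it, and fewer than k
# learn nothing. Parameters (n=5, k=3, the prime) are illustrative.

import random

PRIME = 2**127 - 1  # a Mersenne prime defining the finite field

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):  # evaluate the degree-(k-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

footage_key = random.randrange(PRIME)          # stands in for a real AES key
shares = split(footage_key, n=5, k=3)          # five custodians, three to open
assert reconstruct(shares[:3]) == footage_key  # any quorum recovers the key
assert reconstruct(shares[1:4]) == footage_key
```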
Entities choosing to be transparent changes the game completely. Most of game theory assumes that each entity is opaque inside, but the evolution of human cooperation leverages the difficulty people have in lying. With transparent entities, it’s no longer a question of what they would decide is in their interest; it’s a question of what they can decide given how they are constructed. Open source design and construction can define this. This applies to active shields and will apply to future artificially intelligent agents.
Navigate the Traps: Avoid First Strike Instabilities
Let’s imagine we succeed at designing an encrypted multipolar monitoring system that could detect attacks and enforce appropriate responses. To work in practice, it also must be deployed without causing a first-strike instability.
Given the uncertainties in a potential conflict, all parties have much to gain from simultaneous multilateral deployment of a mutual defense system. But even in a close-to-best-case scenario, in which a deployed system monitors for offensive use and takes action to prevent that use, the danger remains that one side might deploy before its competitors. Unfortunately, the technological designs resulting from sophisticated design-ahead can thus create a first-strike instability: even if no party wants to start a conflict, the fear that another party might incentivizes striking first.28 We have no simple answer to this problem.
The goal is to move towards a strong framework of norms of voluntary interaction with a highly multipolar set of interests. Nevertheless, this world must emerge out of a world that does not yet have that degree of decentralization. We live in a world that is militarily dominated by the United States. China and its military power are rising. Any transition to a balanced multipolar world order would require both to give up considerable military power.
If either of them had the ability to deploy a privacy-respecting monitoring fabric, it would also have the ability to deploy a fabric that is just as powerful but not constrained by the safety protocols. The NSA once had the ability to deploy the ThinThread fabric, which obeyed privacy-preserving protocols; it was the active project being pursued, yet it was killed in favor of a more intrusive fabric. If we can answer the question of how to get a global, credibly self-constraining monitoring fabric deployed despite the preferences of today’s military powers, we are in good shape to worry about enforcement.
We admit to the great fear that an active shield could itself amount to a permanent military takeover. The attempt to build it, deploy it, and get it entrenched can go very badly, but so can civilization if we don’t try. If we try, we have a chance of succeeding; if we don’t, we end up with a system that does not even presume to have minimal internal controls. We must try to build something whose stable point is a neutral framework of rules in which judgment is distributed.
Compensate: Mutual Defense, Commerce, and Science
The more decentralized the systems we build now, the more likely future systems will follow this trend. The most obvious compensating dynamic is military mutual defense pacts. Turkey, Germany, and Italy participate in a nuclear sharing arrangement, a form of mutual defense pact that can be seen as a model for the multipolar framework of an active shield system. In a mutual defense pact among multiple separately deployed active shield subsystems, when one party misbehaves, the others can cooperate to restore the balance.
International trade is another means to push back the evolution of systems of force in favor of cooperation. If nation states are large circles, companies are smaller circles within them, and multinational companies are circles cutting across them. A multinational is still incorporated in a particular country that can corrupt it. But governmental behavior emerges from interest groups, and many interest groups benefit from trade. As trade with the rest of the world increases, it is increasingly in their interest to pressure their governments not to pursue external military dominance. During the British hostilities with their American colonies, British merchants, hurt by trade losses, were amongst the strongest proponents of peace. Since large militaries are not themselves engaged in commerce, the danger of their dominance is not yet sufficiently recognized. We hope to sensitize you to the fact that it is in your short-term interest to engage in multilateral paths in a mutually observant way that avoids centralized military force.
Sometimes, sets of citizens across nations can form explicit voluntary bonds to compensate for international power escalation. The Pugwash Conference was initiated by scientists who, through the Russell-Einstein Manifesto, called for a conference to assess the dangers of weapons of mass destruction. US, Soviet, and other scientists continued to meet during the Cold War, through the Cuban Missile Crisis and the Vietnam War, drafting background work for the Non-Proliferation Treaty and the Biological and Chemical Weapons Conventions. Similarly, the Asilomar Conference grew out of the realization of genetic engineering dangers. It resulted in scientists voluntarily deciding to halt experiments using recombinant DNA. This was a voluntary agreement to coordinate to not create a danger no one wanted to create. If 99% of us stay within voluntary safety controls, then even if 1% starts going off the rails, we can rely on the superintelligence of the rest of civilization to help.
If we can increase our multipolarity while future technological realities are emerging, we have a chance of arriving at a world that is no longer takeoverable because there are enough capable forces that don’t want to be taken over. If one power commits suicide, its place and resources can be taken by other powers who have not committed suicide. There is no guarantee of stability. Under the Peace of Westphalia, dozens of military parties maintained their strategic balance in an antifragile way; the arrangement stayed multipolar for centuries, but it eventually collapsed. Likewise, the Founding Fathers were not confident that their arrangement would not collapse into a dictatorship. They were in a terrifying situation, and their only choice was to do the best they could. Today their division of power is not functioning as well as originally designed, but it has lasted for centuries, and it is difficult not to consider that a success.
The constitution worked well enough that the intelligence of our civilization can iterate on the issue of preserving the balance. Now, we are in the terrifying situation of witnessing military proliferation powered by strong technology. Our only choice is to do the best that we can. If we set up a sufficiently multipolar world prior to the emergence of much greater intelligences, we may be able to defer part of the power balancing problem to them. We need to get to the point where it's our descendants' problem, and where we have enabled them to address it because we didn't hand them a dictatorship.
Chapter Summary
We discussed the dark side of our rapidly maturing civilization: risks that come from automated technologies multiplying individuals’ destructive abilities. Tempting solutions, such as strong central actors that monitor and control attacks, carry their own risk of power suicide, which may be worse than what they prevent.
Less obvious solutions are gradually emerging, thanks to innovations in cryptography. A multipolar active shield would rely on automated encrypted sensor sousveillance of security-relevant activity. It would only disclose its detection when pre-agreed tripwires suggesting illegal activity are triggered. If the shield detects such activity, its robotic enforcement arm prevents the rogue node from engaging in hostile actions. The shield’s multipolar deployment, by which many nodes watch both relevant activity and each other, avoids allowing dangerous activity outside and within the fabric to reach a threatening level.
If we can design and deploy such a system, we may feed two birds with one scone; we take the sting out of the inevitable automatic surveillance and enforcement, and use it to prevent other dangers. While automated sousveillance and enforcement seem like an impossibly far away dystopia, the future will grow out of today’s decisions. The better the structures we put in place soon, the less of a dystopia it will be.
Curious for more? Listen to this Intelligent Cooperation seminar.
Next up: DEFEND AGAINST CYBER THREATS | Computer Security
The Precipice by Toby Ord.
Benefits & Risks of Biotechnology by Future of Life Institute.
The Vulnerable World Hypothesis by Nick Bostrom.
The Doomsday Machine by Daniel Ellsberg.
These two individuals were recently honored with the Future of Life Unsung Hero Award.
The Fragile World Hypothesis by David Manheim.
According to the Doomsday Clock, a symbol representing the likelihood of a man-made global catastrophe, maintained since 1947 by the members of the Bulletin of the Atomic Scientists, we are closer to midnight than ever before. From 3 minutes to midnight when the Soviets kicked off the nuclear arms race by testing their first nuclear weapon in 1949, we moved to 100 seconds to midnight in 2020.
The video Why We Should Ban Lethal Autonomous Weapons is a great way to recalibrate one’s fear.
Do You Love Me? by Boston Dynamics.
Biological Agents on Wikipedia.
How to Deal with North Korea by Mark Bowden.
There’s Plenty of Room at the Bottom by Richard Feynman.
Nano-solutions for the 21st Century by Eric Drexler and Dennis Pamlin.
Towards Post-industrial Manufacturing by Eric Drexler.
Why World Government Risks Collective Suicide by Robin Hanson.
The Great Filter by Robin Hanson.
National Security Letters Are Unconstitutional, Federal Judge Rules press release by EFF.
Contrast this with Lavabit, the open-source encrypted webmail service that Snowden used to communicate with human rights lawyers. It shut down its services in response to what is believed to be a United States government order to reveal or grant access to information. It later re-launched with an improved privacy environment.
Lethal Autonomous Weapons Exist; They Must Be Banned by Stuart Russell et al.
Engines of Creation by Eric Drexler.
The Transparent Society by David Brin.
Engineering the Apocalypse by Rob Reid and Sam Harris.
For instance, the prediction market Augur uses reputation tokens to pay its community to serve as an oracle that correctly resolves predictions. Thus far, this has worked well even for resolving contentious situations such as Vitalik Buterin’s bet on Trump losing the 2020 elections. Chainlink is a decentralized oracle project that seeks to provide accurate data, for instance on DeFi prices, via redundancy across multiple nodes, so that if one node fails to report the correct price, the other nodes can nevertheless form a consensus on prices. Looking ahead, if verifiers increasingly rely on contradicting information sources, arriving at truth-tracking oracles may become more challenging. Even defining what counts as evidence such that it is still relevant in a future that is very different from the present becomes challenging.
In chapter 6, we saw that given the voluntarism of the world in which systems like Ethereum were built, we could in theory have much greater diversity with smaller scale decisions because we don’t need that kind of centralized decision about what the rules are. However, the nature of the risks of bad physical enforcement mechanisms, i.e. robots engaging in violence, is so dangerous that if we do achieve something that is even close to as incorruptible as Ethereum for our governance problems, we should be extremely proud.
The Better Angels of Our Nature by Steven Pinker.