1. FOREWORD | What's at Stake in This Game?
Civilization: A Superintelligence Aligned with Human Interests
Consider civilization as a problem-solving superintelligence.
The graph below shows the global decline in extreme poverty from 1820 to 2015, which prompted Steven Pinker to observe:
“We have been doing something right, and it would be nice to know what, exactly, it is.” 1
After 1980, the rate of decline increases dramatically and persists.2 It would be good to know which dynamics produced this dramatic decline, sustained over the following 40 years. What did we get right? What did civilization learn?
It’s not possible to answer this directly, but we would like to open it up for discussion. We need a conceptual framework for thinking abstractly about civilization over long periods of time.
We like the game metaphor. Games are universal. Asking someone to imagine a game board gives an intuitive context for explaining the iterative behavior of multiple players and the consequent outcomes.
Chess has rules. Then play begins. The game is the set of properties that emerge from the interactions of separately interested players within a framework of rules.
What changed from “nature red in tooth and claw” to our current civilization? As civilization has grown less violent, it has become increasingly dominated by voluntary interactions. The nature of the rules in this less threatening world evolves in a context of willing participation. The emergent outcome of the game is more effective cooperation, creating better outcomes for everybody.
This is something we got right.
Technologies for Voluntary Cooperation
Can we game the future such that more of the world can know the power of voluntary cooperation? Can our future display new forms of cooperation that are not possible with our current technologies?
Luckily, our recent history includes an illustrative case: the timeline of the deployment of modern cryptography.
Cryptography has a long fascinating history, but here we follow a particular thread beginning in August 1977 with the publication of Martin Gardner’s “Mathematical Games” column in Scientific American. Gardner wrote about a ground-breaking discovery: The RSA (Rivest–Shamir–Adleman) public key algorithm.
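To make Gardner’s “ground-breaking discovery” concrete, here is a minimal sketch of textbook RSA in Python. The primes, exponents, and message below are illustrative choices of ours, tiny enough to follow by hand; real RSA uses primes hundreds of digits long, plus padding schemes omitted here.

```python
# Toy illustration of the RSA public key scheme, using tiny example
# primes for readability. Not secure -- for intuition only.

# Key generation
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

# Anyone can encrypt with the public key (e, n);
# only the holder of d can decrypt.
message = 65
ciphertext = pow(message, e, n)
decrypted = pow(ciphertext, d, n)

assert decrypted == message
print(f"n={n} e={e} d={d} ciphertext={ciphertext}")
```

The asymmetry is the point: publishing (e, n) lets strangers send you secrets without any prior shared key, which is exactly what made the algorithm so threatening to agencies built around intercepting communications.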
Mark Miller (co-author) has a personal history in the battles over cryptography. In 1976, Mark apprenticed himself to Ted Nelson, creator of the Xanadu hypertext system. They shared a vision of a global hypertext publishing system as a censorship-free liberating force. But they knew that, without the right architecture, any such system would be corrupted into a tool of 1984-style oppression. Before the invention of modern cryptography, they could not solve this puzzle.
Mark was an avid reader of Scientific American and always went straight to Gardner’s column. After reading about the RSA discovery, he called Ted at an ungodly hour and joyously proclaimed, “Ted! We can prevent the Ministry of Truth!”
They immediately set out to get a copy of the RSA publication. In those days you could not simply download scientific publications. Instead, they mailed a request for a copy. And waited. Nothing arrived. The United States intelligence community had suppressed publication, to preserve their ability to spy on all conversations.
Mark decided to take action. His mission began with a trip to the MIT campus. As a 20-year-old computer geek he was welcome in the tribe and before long, he had the paper in hand. He knew it was critical to distribute it far and wide to counter the attempt at suppression. Fully aware of the risk, he gave copies to his most-trusted friends, telling them “If I disappear, make sure this gets out.”
Handling the paper only with gloves, he made copies at several different copy shops. He mailed copies anonymously to technology-minded hobbyist groups and magazines.
In 1978, U.S. intelligence dropped the attempt to suppress the RSA paper. Mark will never know if his personal actions made any difference. But this experience made the stakes clear. Technologies of freedom are worth fighting for, and building.
Mathematics and technology do not, on their own, lead to privacy of individuals in their digital lives.
Together with like-minded cypherpunks, Mark continued the fighting and building. On one side: export controls, mandatory Clipper chip backdoors, and weak ciphers approved for general use. On the other: Phil Zimmermann’s PGP, Matt Blaze’s hacking of the Clipper chip, John Gilmore’s breaking of government-approved ciphers, and the Electronic Frontier Foundation’s case overturning export controls as unconstitutional violations of free speech. Mostly, we won.
We can easily imagine an alternate history in which these fights had been lost. Our world would be much more totalitarian. In the analog era, all of our conversations could be spied upon. The digital era would have combined inescapable surveillance with modern computing, leading to an inversion of democracy: Those shielded by classified cryptography are unaccountable to us, while we, in every minute aspect of our lives, are accountable to them.
While the real world resembles this nightmare to an uncomfortable extent, the public growth of modern cryptography gives us the tools to fight back.
The HTTPS protocol, built on TLS encryption, gives us secure communication and transactions – secure email, end-to-end encrypted messaging, and the secure credit card transactions necessary for the growth of a Web economy. Human rights activists are less vulnerable thanks to secure messaging.
All over the world, corrupt powers destroy lives. They interfere with individuals’ plans to have a good life through voluntary trade with others. How can you plan with the uncertainty of arbitrary coercive interference? It’s hard for those in the rich world to appreciate the motivation and inventiveness of people in this situation. Modern cryptography, including Bitcoin, allows for a parallel economy that does not rely on the government for commerce and trade. Some are fighting for their lives and these tools give them a chance.
Blockchains are designed to resist corruption. Their decentralized, cryptographically secured interactions are protected from coercive interference by governments or criminals, and transactional history cannot be quietly rewritten. Smart contracts give us a new technological base for creating complex voluntary arrangements, realizing new forms of cooperation.
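The tamper-evidence of transactional history comes from chaining cryptographic hashes: each block commits to the hash of its predecessor, so rewriting any past entry invalidates every later link. A minimal sketch in Python, with made-up transaction strings (not a real blockchain: no consensus, no signatures, no proof of work):

```python
import hashlib
import json

def block_hash(index, prev_hash, data):
    # Hash the block's contents together with the previous block's hash,
    # so any change to history breaks every subsequent link.
    payload = json.dumps([index, prev_hash, data]).encode()
    return hashlib.sha256(payload).hexdigest()

# Build a tiny chain of three blocks
chain = []
prev = "0" * 64  # conventional all-zero hash for the genesis block
for i, data in enumerate(["alice->bob:5", "bob->carol:2", "carol->alice:1"]):
    h = block_hash(i, prev, data)
    chain.append({"index": i, "prev_hash": prev, "data": data, "hash": h})
    prev = h

def verify(chain):
    # Re-derive every hash and check each link against its predecessor.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["index"], prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

assert verify(chain)                  # intact history checks out
chain[1]["data"] = "bob->carol:2000"  # attempt to rewrite history
assert not verify(chain)              # tampering is immediately detectable
```

In a deployed system the same check is performed independently by every participant, which is what removes the centralized point of corruption.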
We are an information society. Starting with these hard-earned gains, we can build a solid foundation crucial to our future, where each layer is built on previous layers in the context of trust and secure cooperation. Although in its infancy, we expect the cryptocommerce space to be transformative.
Centralized Power
Centralized power — concentrated sources of authority and control — is incompatible with a voluntary, cooperative framework of civilization.
We’re not advocating for revolutionary takeover or government regulations to combat centralization. First, it’s often not clear what or how to regulate. Second, such tactics often backfire, creating more centralization as a consequence.
Let’s take a look at a couple of cases: bank regulations and Google’s Gmail.
The banking industry is susceptible to top-down manipulation. Operation Choke Point3 successfully shut down lawful businesses by denying them access to standard business services. This was not the voluntary action of a few banks; rather, every bank reacted to the same regulatory pressure. The targeted businesses had nowhere to turn.
Google’s Gmail is an interesting case. Before webmail, we had a world of decentralized email. There were several projects working to add crypto, which would have given us truly decentralized secure email. Instead, centralized webmail took over.
When first released in 2004, Gmail had 2 million users. Today it has 1.5 billion. Google stores, and has access to, the plain text of users’ email. Why were people willing to entrust Google with all their correspondence? One reason was Google’s slogan at the time, “Don’t be evil”, which projected an image of trustworthiness.
Over time, Google accumulated a centralized trove of the contents of email communication of over a billion people. The US federal government, unable to resist the temptation, issued national security letters (1) demanding the handover of private email and (2) prohibiting Google from telling anyone. Whether Google wanted to resist or comply is a separate issue; the point is that a centralized vulnerability inevitably led to its corruption.
There is a natural dynamic between centralizing and decentralizing forces. As in the Gmail case, at first, we often risk centralized vulnerability for convenience.
Centralized vulnerabilities create temptations to corruption that cannot be resisted. As the resulting corruption becomes apparent, it raises the competitive advantage of decentralized competitors. We seek to create incorruptible decentralized systems that can outcompete centralized ones.
By working within the constraints of voluntary competition, we are more protected from our own mistakes. If our dreams are misconceived, they are also less likely to outcompete. If, on the other hand, they are the genuine improvements that we think they are, they are also more likely to win these competitions in the long run.
The battle for decentralization can never have an ultimate decisive victory, but, in the absence of astute watchfulness, it can suffer an ultimate defeat.
Superintelligence
There are many views on AI dangers. One prominent perspective goes as follows: Once an AI exceeds human capacity, it can improve its own design much faster than human designers can. As it improves itself, it also improves its ability to improve itself, leading to an explosive chain reaction which can suddenly catapult this one breakthrough AI into a superintelligent capacity exceeding all the rest of human civilization combined.4
This is the “hard takeoff” scenario, leading to a “unipolar” outcome. It is a hard takeoff because it happens so suddenly that nothing else has a chance to adapt during the process. It is unipolar in the sense that this one superintelligent entity may be more powerful than everything else, and so in a position to rule everything else.
Starting from the notion that this unipolar takeover is inevitable—that we will necessarily be ruled by a permanent dictator of our own design—some conclude we should design a benevolent dictator: one that wants to serve our interests. This raises two design questions: (1) how do we construct the AI so that it wants to serve our interests, and (2) what are our interests anyway? The first question is hard, but the second opens up philosophical problems that have eluded general agreement over the last few millennia.
From our perspective, any best case scenario arising from this notion is a worst case scenario, the one we must prevent at all costs. Any unipolar takeover of the world is unlikely to be benevolent. We might hope that this unprecedented power over the world would be wielded by “the right kinds of people”, but history tells us that powerful positions attract those who want power.
Instead of a centrally designed formula encoding the general good, we currently have a diverse pluralistic world of many different people making their own choices about what they want and how to achieve it. People formulate their goals using their idiosyncratic personal knowledge within a great variety of cultural and philosophical systems. There may be no general good beyond the revealed preferences of each of us making choices to pursue our goals in our own way.
If the problem of superintelligence were a purely new problem with no historical precedent, there would be little relevant to learn from history. But human institutions are already non-human intelligences with which we cooperate and against which we defend ourselves. Depending on the nature of the institution, it can be well or badly aligned with human interests.
Until the last few centuries, most of our history was the history of tyranny. By contrast, with the invention of democracy, separation of powers, rule of law, due process, individual rights, and independent judiciaries, not only have we better aligned our institutions with our interests, we have also enabled the ecosystem of these institutions — our civilization as a whole — to rapidly grow in intelligence and benefit to its constituents.
The superintelligence of civilization is already emergent from the interplay of human and machine intelligences. Within our multipolar civilization, as machines get more intelligent, they will contribute more to the overall intelligence of our civilization.
We're already facing, and have faced now for over seven decades, the existential risk of nuclear war. We are in a multipolar world of multiple nations armed with nukes and willing to use them if they absolutely have to. Now that we're in that situation, our only options going forward are multipolar options. Anything that threatens a unipolar takeover risks provoking a nuclear war. So unipolar solutions may be off the table anyway.
We cannot survey the landscape of choices from an imagined position outside the game. One can imagine Karl Marx sitting at his desk, overlooking the factory floor below, appreciating the coordination and efficiency of the workers. It’s all right there in front of him. He cannot fathom why the economy outside needs more than this elemental structure.
We are in a different game. The people who can shape the game are in it. We’re in an iterated game. We are grateful that previous players set it up such that we can now play the next moves from inside the game. Whatever we do, the result determines what kind of game gets played in the future. We cannot not play the game.
But we can iterate and make sure the game emerges in a multipolar manner within the voluntary framework of civilization. This book explores technologies that can be useful on this path.
In the future, most cognition will be non-human. The game dynamics we start now must be good enough for human and non-human interests, so that these future players have an interest in upholding the voluntary nature of the game. That is our ultimate protection. If we can accomplish this difficult task, future players can unlock currently unimaginable levels of this beautiful game.
A History of Violence by Steven Pinker.
Graph taken from World Population Living in Extreme Poverty from Our World in Data. A similar graph with China excluded exists in Poverty Decline without China.
Superintelligence by Nick Bostrom.