10. ITERATE THE GAME | Racing Where?
Different players value things differently. The game of civilization emerges as the composition of these valuations. The strategy of voluntary cooperation serves a great many of them. If we get better at the game, what awaits in future rounds of play? Hanson suggests:
“... in the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures. Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10^3,000, which seems impossible to achieve with only the 10^70 atoms of our galaxy available by then. Yes we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster. So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per person income drop to near subsistence levels.”
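Hanson's back-of-the-envelope arithmetic is easy to verify: doubling every century for a million years means 10,000 doublings, and 2^10,000 is roughly 10^3,010, which rounds to his quoted factor of 10^3,000. A minimal sketch (the atom count of ~10^70 is the rough bound Hanson cites, not a precise figure):

```python
import math

# Income doubling every century, compounded over a million years.
years = 1_000_000
doubling_period = 100                    # years per doubling
doublings = years // doubling_period     # 10,000 doublings

# Total growth factor is 2^10000; take log10 to keep it representable.
log10_factor = doublings * math.log10(2)
print(f"growth factor ~= 10^{log10_factor:.0f}")  # ~= 10^3010

# Compare with the rough resource bound: ~10^70 atoms in our galaxy.
atoms_exponent = 70
print(f"exceeds the galactic atom count by ~10^{log10_factor - atoms_exponent:.0f}")
```

Even granting every atom as a unit of wealth, the required growth factor overshoots the available resources by nearly 3,000 orders of magnitude, which is the force of Hanson's point.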
Imagine a Malthusian future, in which civilization, left to emergent phenomena, leads to a race to subsistence. Much of our planet’s history, from bacteria to civilization, occurred at subsistence levels. If being far from subsistence is exceptional, we should not be looking forward to an imaginary future where most activity is far from it. More efficient activity outcompetes less efficient activity, so more of the overall activity may be efficient. We are racing toward competitive equilibria, which, without any regulation, amount to subsistence.
Watch Robin Hanson’s A Simple Model of Grabby Aliens.
Competitive Equilibria: Subsistence without Suffering
Should we be worried? Only if subsistence means suffering. We find subsistence intuitively repugnant because our past history causes us to equate subsistence with suffering. Given the increasing automatability of manual tasks, future activity will not involve back-breaking physical activity but rather knowledge-based work. Does knowledge work have to entail suffering? Let's contrast two options:
One is that the kind of dominant computational work required to sustain our future does not require cognition. Instead, cognition is just a distraction from the computational machinery. In that case, our subsistence activity will not be cognitive activity. There is simply no suffering because there is nothing to experience it. Existing bacteria have at least a hundred times the mass of all human beings. Insofar as their activity is at subsistence, most activity is already at subsistence. And we are not worried by it.
The other option is that to be an efficient knowledge worker, you need cognition. In this case, the idea that suffering knowledge workers have an efficiency advantage over happy knowledge workers contradicts everything we know about knowledge work. Hanson lays out a future of human-brain emulations who do much work at subsistence, where subsistence is not suffering but rather involves living rich lives in VR.
This scenario is still rather conservative in assuming current human cognition as a constraint. It could be that non-human cognition will dominate human cognition, and that altered human cognition will dominate unaltered human cognition. Alteration could entail transferring a precisely literal human cognition into a VR environment. But one could equally imagine cognition that finds activities it engages in fulfilling because they are useful.
Our concern about subsistence for future non-human intelligences is well-intentioned. But our intuitions about subsistence are intuitions about creatures suffering. What pushes most activity toward subsistence is the evolutionary logic that whatever activity uses resources most efficiently comes to constitute most of the overall activity. If cognition doesn’t use resources most effectively, it may not become the dominant activity. If the dominant activity is cognitive, there is nothing about suffering that makes it a more efficient resource user. In neither scenario must there be suffering at subsistence.
Pick Pockets Away from Subsistence
There will always be pockets away from subsistence. If humans enter into the period of rapid future growth, some of us will choose to expand to subsistence in order to produce more output. Others will choose to remain within the bubble of surplus rather than growing at the margins. Those who grow as fast as possible will have descendants constituting more of the overall aggregate activity. Most of those descendants will return to being at subsistence. But the scale of the pockets of surplus can be magnitudes larger than our entire world, even if they are a minority of the universe.
Subsistence is not necessarily bad. Overall activity is itself a kind of wealth. So more overall cognition is a kind of wealth, just like having surplus is a kind of wealth. Which kind of wealth we think is a better trajectory for our future goes back to what we value.
There is no objective determination. A system of voluntarism gives everyone who enters into that rapid growth period a good place from which to choose the path they value. Entities at subsistence and those not at subsistence alike may find they benefit from upholding a system allowing them to be independent from each other or to cooperate to achieve their goals. A vast universe of billions of cognitive creatures (or, to avoid speaking of discrete creatures, a billion times more overall cognition) is possible in which most cognition is at subsistence. This still makes everything that we can experience with our current selves pale in comparison.
A Descriptive vs. Prescriptive Attitude to the Future
It is dangerous to overestimate our knowledge or underestimate our ignorance. We might know something about the physics of the rest of the universe via astronomy and cosmology as well as reasoning about computational limits. But the utility of resources when deployed by future intelligences that are incomprehensible to us is itself incomprehensible to us. We are ignorant of their needs and wants. Does this mean we should take a rather descriptive stance to the future?
The framework that created our current cooperative architectures emerged in a spontaneous and decentralized way. From the potential for violence and from our engaging in violence with each other, we saw the emergence of an increasingly voluntary society. To uphold voluntarism, we suggested a cryptocommerce architecture, a physical enforcement mechanism, and a property rights regime. These architectures deviate from the spontaneous order perspective we started this book with.
Perhaps the evolution of voluntary interaction frameworks is itself something that we should trust future intelligences, human or not, to figure out for themselves? Insofar as this book attempts to provide an alternative to locked in futures, are we making the same mistake by promoting specific architectures?
Deferring action to future generations could be preferable under the assumption that they have a choice in the matter. With automation as the main driver of violence, the destructive potential for violent negative sum tragedies has grown tremendously. Computer insecurities make our civilization’s very foundations vulnerable. Artificial intelligence risks winner-take-all scenarios with one player dominating everything else. Soon, those who can may race to consume the universe. Even the prospect of these scenarios creates first-strike instabilities to destroy potential competitors.
We need to act if we want future generations to be able to make any choices at all. Recognizing the dangers, we may arrive at a negotiated solution that more resembles our existing massively multi party civilization agreement. Whatever we do, we do within a game left to us by prior generations. Even if we do “nothing”, we endow them with a strategic set of relationships with payoffs and the potential for players to make violent and nonviolent moves. We have come full circle to the start of this book; we can’t exempt ourselves from creating the game within which future players decide.
There is no reason to think that the game for future generations will be a better one if we do not try to influence what it will be. There is reason to believe that by trying to do a good job, we can leave them with a game that, when iterated, results in a better situation than if we had not tried. We are actually much better off because our ancestors succeeded at imposing a game on us. The US Founding Fathers set up a game that, when iterated, resulted in a world in which we are leading better lives than if they had not tried. There is much they could not and did not anticipate, but nevertheless they got some fundamental principles right. We can and should work to determine and implement what it would take to leave the next iteration with a better game.
Future Generations’ Seats at the Table
What hope is there that the future interests of vastly greater intelligences will uphold our negotiated arrangements?
On the surface, future generations do not have seats at the negotiation table. Whatever current players can come to agreement on becomes the initial game state inherited by future generations. But future players do have a seat in that we of the present care about their interests. It is not just that we want them to get more of what they want. We also understand that strategic instabilities can lead to non-voluntary interaction in order to bring about a different game. Given current weapons, this could instantaneously eliminate entities whose continued existence we value. We want to avoid sufficiently many future players having enough regrets about the game that they believe their best interests are served by violently overthrowing it.
We need to make arrangements good enough that using them has greater expected value than taking a chance at overthrowing them, and ideally so that future players are immensely better off than they would be without the system. If future generations can most effectively pursue their goals by upholding our endowed arrangements, they will keep using them as Schelling Points.
Values & Voluntarism in Future Games
We should approach future intelligences that will make up most of the universe’s cognition without making assumptions beyond very general universal principles, such as their making choices in the service of their goals. Within this constraint, the best we can do to enable future entities to solve their problems is to set up architectures for voluntary cooperation. But ultimately, future intelligences will design their own cooperative arrangements. These should not be bottlenecked by human designers.
A rich variety of games, interactions and arrangements will be played simultaneously in many different ways. Some will end up stuck in traps that players cannot figure out how to escape. Given enough complexity and diversity, those that grow and build wealth won’t get stuck. The ones that do just become a smaller and smaller fraction of the overall system. The system’s growing wealth, complexity, and cognition emerge from the games that didn't get stuck. Having seen voluntarism emerge without planning, across very different systems, from software architectures to institutions, gives us reason to believe a similar future is at least possible. But future intelligences will also engage in ever richer incremental design.
Stable voluntary boundaries across entities are fundamental to cooperative interaction in networks of entities making requests of other entities. Because voluntary boundaries enable independent innovation on both sides of the boundary, our descendants might very well invent other coordination points. In the voluntary Paretotropian framework of “I value what I value, you value what you value, let's cooperate”, we choose an arrangement that sets initial conditions, leaving the outcome adaptive to future knowledge.
Even what we mean by “voluntarism” is not written in stone, but emerges from negotiation. Voluntarism itself doesn’t give us a framework of rights; rather, whatever rights framework we develop in order to coordinate becomes the framework through which voluntarism extends. For instance, voluntarism with regard to our corporeal bodies has become non-negotiable. But future negotiations of space resource property rights, for instance, may extend the notion of voluntarism into other resources with no single unambiguous path ahead.
With nothing less than our future civilization as the outcome of the games we set up, it seems incredibly important to get the initial conditions right. Or does it? Our norms emerged from iterated games shaped by initial conditions. The game we inherited determined the vantage point from which we design the next moves. Whatever constraints we now put in place will give rise to strategies that will grow into the norms and values of future generations.
If there is no position outside of the game from which to evaluate the game, is it all relative? Not necessarily. We can still point to a vector that sets a trajectory through a very complicated space. To the extent that we succeed in thinking through our next move, we believe that choosing our next actions along a planned trajectory will have a better than random correlation with norms that emerge in the universe descendant from those choices.
If we simply valued minimizing suffering, we could set up a future that succeeds at doing so, for instance by going extinct. If we value growth of cognition, creativity, and adaptive complexity, there are different, more complicated choices to make. In this book, we suggested that intelligent voluntary cooperation is a good heuristic for choosing amongst this set of choices and proposed a few moves for the next game iterations.
Check out this seminar on how game theory might apply to galactic and universe scale civilizations.
We have reason to believe that setting up the game as we have discussed in this book brings a better future than if we don’t try. We uphold a system that enables increasingly valuable arrangements by making sure all parties have a stake in the game. We can do this by continuing to improve our system of voluntary cooperation to include other sentient, artificial, and alien intelligences as they are encountered or developed.
Nobody can tell from our current positions on the board where this game will ultimately end. This is a feature; after all, why play if you know the outcome? What we can do is set up the board so our descendants and our future selves can discover these wonders for themselves.
In The Future of Human Evolution, Bostrom extrapolates this efficiency mandate to a world of human mind uploads who outsource most tasks to others: “Why do I need to bother with making decisions about my personal life when there are certified executive-modules that can scan my goal structure and manage my assets so as best to fulfill my goals?” Some uploads who choose to retain most of their functionality and handle tasks themselves would be comparable to hobbyists who enjoy growing their own vegetables, but, lacking efficiency, may eventually also be outcompeted. Zack Davis terrifyingly explores such a human brain emulation world in The Contract-Drafting Em.
For instance, in Letter from Utopia, Bostrom envisions a future mind looking back at our current selves, encouraging us to bring it into existence by describing its experience: “My mind is wide and deep. I have read all your libraries, in the blink of an eye. I have experienced human life in many forms and places. [...] Does the whole exceed the sum of the parts or do the parts exceed the whole? What I have is not more of what you have. It’s not only the particular things, the paintings and toothpaste-tube designs, the book covers, the epochs, the loves, the rusted leaves, the rivers, and the random encounters, the satellite photos, and the hadron collider data streams. It is also the complex relationships between these particulars. There are ideas that can be formed only on top of such a wide experience base, and there are depths that can only be plumbed with such ideas. And the games. And the lusty things, and the things I can’t even mention. You could say I am happy, that I feel good. That I feel surpassing bliss and delight. Yes, but these are words to describe human experience. They are like arrows shot at the moon. What I feel is as far beyond feelings as what I think is beyond thoughts.”