<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Foresight Institute: Gaming the Future: Technologies for Intelligent Voluntary Cooperation]]></title><description><![CDATA[A living book about technologies of intelligent voluntary cooperation.]]></description><link>https://foresightinstitute.substack.com/s/intelligent-voluntary-cooperation</link><image><url>https://substackcdn.com/image/fetch/$s_!PcTL!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e0930c0-4077-4f12-a6c2-1c3f7fd5e366_400x400.png</url><title>Foresight Institute: Gaming the Future: Technologies for Intelligent Voluntary Cooperation</title><link>https://foresightinstitute.substack.com/s/intelligent-voluntary-cooperation</link></image><generator>Substack</generator><lastBuildDate>Wed, 22 Apr 2026 00:02:59 GMT</lastBuildDate><atom:link href="https://foresightinstitute.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Foresight Institute]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[foresightinstitute@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[foresightinstitute@substack.com]]></itunes:email><itunes:name><![CDATA[Allison Duettmann]]></itunes:name></itunes:owner><itunes:author><![CDATA[Allison Duettmann]]></itunes:author><googleplay:owner><![CDATA[foresightinstitute@substack.com]]></googleplay:owner><googleplay:email><![CDATA[foresightinstitute@substack.com]]></googleplay:email><googleplay:author><![CDATA[Allison Duettmann]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[10. ITERATE THE GAME | Racing Where? 
]]></title><description><![CDATA[Previous chapter: WELCOME NEW PLAYERS | Artificial Intelligences]]></description><link>https://foresightinstitute.substack.com/p/iterate-game</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/iterate-game</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:28:58 GMT</pubDate><enclosure url="https://cdn.substack.com/image/youtube/w_728,c_limit/jMouMl7RHk0" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Previous chapter: <a href="https://foresightinstitute.substack.com/p/new-players">WELCOME NEW PLAYERS | Artificial Intelligences</a></h3><p></p><p>Different players value things differently. The game of civilization emerges as the composition of these valuations. The strategy of voluntary cooperation serves a great many of them. If we get better at the game, what awaits in future rounds of play? Hanson suggests that:</p><p><em>&#8220;... in the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures. Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10<sup>3000</sup>, which seems impossible to achieve with only the 10<sup>70</sup> atoms of our galaxy available by then. 
Yes we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster.&nbsp; So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per person income drop to near subsistence levels.&#8221;</em>&nbsp;<em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em>&nbsp;</p><p>Imagine a Malthusian future in which civilization, left to emergent phenomena, leads to a race to subsistence. Much of our planet&#8217;s history from bacteria to civilization occurred at subsistence levels. If being far from subsistence is exceptional, we should not be looking forward to an imaginary future where most activity is far from it. More efficient activity outcompetes less efficient activity, so more of the overall activity may be efficient.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> We are racing toward competitive equilibria, which, without any regulation, amount to subsistence.&nbsp;</p><div id="youtube2-0lKliaFllPA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;0lKliaFllPA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/0lKliaFllPA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Watch Robin Hanson&#8217;s <a href="https://foresight.org/salon/robin-hanson-george-mason-university-a-simple-model-of-grabby-aliens/">Simple Model of Grabby Aliens</a>.</p><h2>Competitive Equilibria: Subsistence without 
Suffering</h2><p>Should we be worried? Only if subsistence means suffering. We find subsistence intuitively repugnant because our past history causes us to equate subsistence with suffering. Given the increasing automatability of manual tasks, future activity will not involve back-breaking physical labor but rather knowledge-based work. Does knowledge work have to entail suffering? Let's contrast two options:&nbsp;</p><p>One is that the kind of dominant computational work required to sustain our future does not require cognition. Instead, cognition is just a distraction from the computational machinery. In that case, our subsistence activity will not be cognitive activity. There is simply no suffering because there is nothing to experience it. Existing bacteria have at least a hundred times the mass of all human beings. Insofar as their activity is at subsistence, most activity is already at subsistence. And we are not worried by it.</p><p>The other option is that to be an efficient knowledge worker, you need cognition. In this case, the idea that suffering knowledge workers have an efficiency advantage over happy knowledge workers contradicts everything we know about knowledge work. Hanson lays out a future of human-brain emulations who do much work at subsistence, where subsistence is not suffering but rather involves living rich lives in VR.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>&nbsp;</p><p>This scenario is still rather conservative in assuming current human cognition as a constraint. It could be that non-human cognition will dominate human cognition, and that altered human cognition will dominate unaltered human cognition. Alteration could entail transferring a literal, precise copy of human cognition into a VR environment. 
But one could equally imagine cognition that finds activities it engages in fulfilling because they are useful.&nbsp;</p><p>Our concern about subsistence for future non-human intelligences is well-intentioned. But our intuitions about subsistence are about creatures suffering. What pushes most activity toward subsistence is the evolutionary logic that whatever activity uses resources more efficiently becomes most of the activity to subsist on. If cognition doesn&#8217;t most effectively use resources, it may not become the dominant activity. If the dominant activity is cognitive, there is nothing about suffering that makes it a more efficient resource user. In neither scenario must there be suffering at subsistence. </p><h2>Pick Pockets Away from Subsistence</h2><p>There will always be pockets away from subsistence. If humans enter the period of rapid future growth, some of us will choose to expand to subsistence in order to produce more output. Others will choose to remain within the bubble of surplus rather than growing at the margins. Those who grow as fast as possible will have descendants constituting more of the overall aggregate activity. Most of those descendants will return to being at subsistence. But the scale of the pockets of surplus can be orders of magnitude larger than our entire world, even if they are a minority of the universe.&nbsp;&nbsp;</p><p>Subsistence is not necessarily bad. Overall activity is itself a kind of wealth. So more overall cognition is a kind of wealth, just like having surplus is a kind of wealth. Which kind of wealth we think is a better trajectory for our future goes back to what we value.&nbsp;</p><p>There is no objective determination. A system of voluntarism gives everyone who enters into that rapid growth period a good place from which to choose the path they value. 
Entities at subsistence and those not at subsistence alike may find they benefit from upholding a system allowing them to be independent from each other or to cooperate to achieve their goals. A vast universe of billions of cognitive creatures (or, to avoid speaking of discrete creatures, a billion times more overall cognition) is possible in which most cognition is at subsistence. Such a universe would still make everything we can experience as our current selves pale in comparison.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>&nbsp;</p><h1>A Descriptive vs. Prescriptive Attitude to the Future</h1><p>It is dangerous to overestimate our knowledge or underestimate our ignorance. We might know <em>something</em> about the physics of the rest of the universe via astronomy and cosmology as well as reasoning about computational limits. But the utility of resources when deployed by future intelligences that are incomprehensible to us is itself incomprehensible to us. We are ignorant of their needs and wants. Does this mean we should take a rather descriptive stance toward the future?</p><p>The framework that created our current cooperative architectures emerged in a spontaneous and decentralized way. From the potential for violence and from our engaging in violence with each other, we saw the emergence of an increasingly voluntary society. To uphold voluntarism, we suggested a cryptocommerce architecture, a physical enforcement mechanism, and a property rights regime. These architectures deviate from the spontaneous order perspective we started this book with.&nbsp;</p><p>Perhaps the evolution of voluntary interaction frameworks is itself something that we should trust future intelligences, human or not, to figure out for themselves? 
Insofar as this book attempts to provide an alternative to locked-in futures, are we making the same mistake by promoting specific architectures?&nbsp;</p><p>Deferring action to future generations could be preferable under the assumption that they have a choice in the matter. With automation as the main driver of violence, the destructive potential for violent negative-sum tragedies has grown tremendously. Computer insecurities make our civilization&#8217;s very foundations vulnerable. Artificial intelligence risks winner-take-all scenarios with one player dominating everything else. Soon, those who can may race to consume the universe. Even the prospect of these scenarios creates first-strike instabilities: incentives to destroy potential competitors.&nbsp;</p><p>We need to act if we want future generations to be able to make any choices at all. Recognizing the dangers, we may arrive at a negotiated solution that more resembles our existing massively multi-party civilization agreement. Whatever we do, we do within a game left to us by prior generations. Even if we do &#8220;nothing&#8221;, we endow them with a strategic set of relationships with payoffs and the potential for players to make violent and nonviolent moves. We have come full circle to the start of this book; we can&#8217;t exempt ourselves from creating the game within which future players decide.&nbsp;&nbsp;</p><p>There is no reason to think that the game for future generations will be a better one if we do not try to influence what it will be. There is reason to believe that by trying to do a good job, we can leave them with a game that, when iterated, results in a better situation than if we had not tried. We are actually much better off because our ancestors succeeded at imposing a game on us. The US Founding Fathers set up a game that, when iterated, resulted in a world in which we are leading better lives than if they had not tried. 
There is much they could not and did not anticipate, but nevertheless they got some fundamental principles right. We can and should work to determine and implement what it would take to leave the next iteration with a better game. </p><h2>Future Generations&#8217; Seats at the Table</h2><p>What hope is there that vastly greater future intelligences will find it in their interests to uphold our negotiated arrangements?&nbsp;</p><p>On the surface, future generations do not have seats at the negotiation table. Whatever current players can come to agreement on becomes the initial game state inherited by future generations. But future players do have a seat in that we of the present care about their interests. It is not just that we want them to get more of what they want. We also understand that strategic instabilities can lead to non-voluntary interaction in order to bring about a different game. Given current weapons, this could instantaneously eliminate entities whose continued existence we value. We want to avoid sufficiently many future players having enough regrets about the game that they believe their best interests are served by violently overthrowing it.&nbsp;&nbsp;</p><p>We need to make arrangements good enough that using them has greater expected value than taking a chance at overthrowing them - and ideally, so that they are immensely better off than they would be without the system. If future generations can most effectively pursue their goals by upholding our endowed arrangements, they will keep using them as Schelling points.</p><h1>Values &amp; Voluntarism in Future Games</h1><p>We should approach future intelligences that will make up most of the universe&#8217;s cognition without making assumptions beyond very general universal principles, such as their making choices in the service of their goals. Within this constraint, the best we can do to enable future entities to solve their problems is to set up architectures for voluntary cooperation. 
But ultimately, future intelligences will design their own cooperative arrangements. These should not be bottlenecked by human designers.&nbsp;</p><p>A rich variety of games, interactions, and arrangements will be played simultaneously in many different ways. Some will end up stuck in traps that players cannot figure out how to escape. Given enough complexity and diversity, those that grow and build wealth won&#8217;t get stuck. The ones that do just become a smaller and smaller fraction of the overall system. The system&#8217;s growing wealth, complexity, and cognition emerge from the games that didn't get stuck. Having seen voluntarism emerge without planning, across very different systems, from software architectures to institutions, gives us reason to believe a similar future is at least possible. But future intelligences will also engage in ever richer incremental design.&nbsp;</p><p>Stable voluntary boundaries across entities are fundamental to cooperative interaction in networks of entities making requests of other entities. Because voluntary boundaries enable independent innovation on both sides of the boundary, our descendants might very well invent other coordination points. In the voluntary Paretotropian framework of &#8220;I value what I value, you value what you value, let's cooperate&#8221;, we choose an arrangement that sets initial conditions, leaving the outcome adaptive to future knowledge.</p><p>Even what we mean by &#8220;voluntarism&#8221; is not written in stone, but emerges from negotiation. Voluntarism itself doesn't give us a framework of rights; rather, whatever rights framework we develop in order to coordinate becomes the framework through which voluntarism is emergently extended. For instance, voluntarism with regard to our corporeal bodies has become non-negotiable. 
But future negotiations of space resource property rights, for instance, may extend the notion of voluntarism into other resources with no single unambiguous path ahead.&nbsp;</p><p>With nothing less than our future civilization as the outcome of the games we set up, it seems incredibly important to get the initial conditions right. Or does it? Our norms emerged from iterated games shaped by initial conditions. The game we inherited determined the vantage point from which we design the next moves. Whatever constraints we now put in place will give rise to strategies that will grow into the norms and values of future generations.&nbsp;</p><p>If there is no position outside of the game from which to evaluate the game, is it all relative? Not necessarily. We can still point to a vector that sets a trajectory through a very complicated space. To the extent that we succeed in thinking through our next move, we believe that choosing our next actions along a planned trajectory will have a better-than-random correlation with norms that emerge in the universe descendant from those choices.&nbsp;</p><p>If we simply valued minimizing suffering, we could set up a future that succeeds at doing so, for instance by going extinct. If we value growth of cognition, creativity, and adaptive complexity, there are different, more complicated choices to make. 
In this book, we suggested that <em>intelligent voluntary cooperation</em> is a good heuristic for choosing among these options and proposed a few moves for the next game iterations.</p><div id="youtube2-jMouMl7RHk0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;jMouMl7RHk0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/jMouMl7RHk0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Check out this <a href="https://foresight.org/salon/anders-sandberg-game-theory-of-cooperating-w-extraterrestrial-intelligence-future-civilizations/">seminar</a> on how game theory might apply to galactic- and universe-scale civilizations.</p><h3>Chapter Summary</h3><p>We have reason to believe that setting up the game as we have discussed in this book brings a better future than if we don&#8217;t try. We uphold a system that enables increasingly valuable arrangements by making sure all parties have a stake in the game. We can do this by continuing to improve our system of voluntary cooperation to include other sentient, artificial, and alien intelligences as they are encountered or developed.&nbsp;</p><p>Nobody can tell from our current positions on the board where this game will ultimately end. This is a feature; after all, why play if you know the outcome? 
What we can do is set up the board so our descendants and our future selves can discover these wonders for themselves.</p><p>END</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html">This is the Dream Time</a> by Robin Hanson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>In <a href="https://www.nickbostrom.com/fut/evolution.html">The Future of Human Evolution</a>, Bostrom extrapolates this efficiency mandate to a world of human mind uploads which outsource most tasks to others: <em>&#8220;Why do I need to bother with making decisions about my personal life when there are certified executive-modules that can scan my goal structure and manage my assets so as best to fulfill my goals?&#8221; </em>Some uploads who choose to retain most of their functionality and handle tasks themselves would be comparable to hobbyists who enjoy growing their own vegetables but, being less efficient, may eventually also get outcompeted. 
Zack Davis terrifyingly explores such a human brain emulation world in <a href="https://secularsolstice.github.io/Contract_Drafting_Em/gen/">The Contract-Drafting Em</a>.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://ageofem.com/">Age of Em</a> by Robin Hanson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>For instance, in <a href="https://nickbostrom.com/utopia.html">Letter from Utopia</a>, Bostrom envisions a future mind looking back at our current selves, encouraging us to bring it into existence by describing its experience: &#8220;<em>My mind is wide and deep. I have read all your libraries, in the blink of an eye. I have experienced human life in many forms and places. [...] Does the whole exceed the sum of the parts or do the parts exceed the whole? What I have is not more of what you have. It&#8217;s not only the particular things, the paintings and toothpaste-tube designs, the book covers, the epochs, the loves, the rusted leaves, the rivers, and the random encounters, the satellite photos, and the hadron collider data streams. It is also the complex relationships between these particulars. There are ideas that can be formed only on top of such a wide experience base, and there are depths that can only be plumbed with such ideas. And the games. And the lusty things, and the things I can&#8217;t even mention. You could say I am happy, that I feel good. That I feel surpassing bliss and delight. Yes, but these are words to describe human experience. They are like arrows shot at the moon. 
What I feel is as far beyond feelings as what I think is beyond thoughts.&#8221;</em></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[9. WELCOME NEW PLAYERS | Artificial Intelligences ]]></title><description><![CDATA[Previous chapter: DEFEND AGAINST CYBER THREATS | Computer Security]]></description><link>https://foresightinstitute.substack.com/p/new-players</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/new-players</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:28:29 GMT</pubDate><enclosure url="https://cdn.substack.com/image/youtube/w_728,c_limit/sq6UKF8CwJ0" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Previous chapter: <a href="https://foresightinstitute.substack.com/p/defend-cyber">DEFEND AGAINST CYBER THREATS | Computer Security</a></h3><p>Voluntary cooperation is a main feature of the civilizational game. It got us to where we are today. We explored how to improve and defend this dynamic from within the game. Gradually, non-human intelligent players are entering the playing field. In a few more iterations, they will be major players. If we want to continue our Paretotropian ascent, we had better make sure our cooperative framework is set up to handle a diversity of intelligences pursuing a diversity of goals.&nbsp;</p><h1>AI Threats</h1><p>Let&#8217;s pretend we have achieved computer security. Are we set for a cooperative long-term future? It is worth revisiting Toby Ord&#8217;s AI takeoff scenario. 
According to Ord, once an AI exploits computer security vulnerabilities, it could escalate its power.&nbsp;</p><p><em>&#8220;This is more speculative, but there are many plausible pathways: by taking over most of the world&#8217;s computers, allowing it to have millions or billions of cooperating copies; by using its stolen computation to improve its own intelligence far beyond the human level; by using its intelligence to develop new weapons technologies or economic technologies; by manipulating the leaders of major world powers (blackmail, or the promise of future power); or by having the humans under its control use weapons of mass destruction to cripple the rest of humanity. Of course, no current AI systems can do any of these things. But the question we&#8217;re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be &#8216;yes.&#8217;&#8221; </em></p><p>While &#8220;AGI&#8221; sometimes describes an <em>Artificial General Intelligence</em> of mere human-level intelligence, many assume an AGI reaching that level will eventually exceed it in most relevant intellectual tasks. An AGI that displaces human civilization as the overall framework of relevance for intelligence and dominates the world can be described as an <em>AGI singleton</em>. This scenario carries two threats worth unpacking: the first-strike instabilities generated by the mere possibility of such a takeover, and the value alignment problems resulting from a successful takeover.&nbsp;</p><h3>AI First Strike Instabilities</h3><p>An AGI singleton potentially conquering the world is also a version of the unitary permanent military takeover threat discussed before. We live in a world where multiple militaries have nuclear weapon delivery capabilities. 
If an AGI takeover scenario becomes credible and believed to be imminent, this expectation is itself an existential risk.&nbsp;</p><p>Any future plan must be constrained by our world of multiple militaries, each of which can start a very costly war. If some actor realizes that another actor, whether AI-controlling human or AI entity, will soon be capable of taking over the world, it is in their interest to destroy them first. Even if non-nuclear means were used for this, attempting to push ahead in AGI capacities pre-emptively re-creates the Cold War&#8217;s game theory. This is true even if an AGI is impossible to create, but merely believed possible. Our transition from the current reality to a high-tech, high-intelligence space-based civilization must avoid this first-strike instability.&nbsp;</p><p>However imperfect our current system is, it is the framework by which people pursue their goals and in which they have vested interests. First-strike instability is just a special case of a more general problem: if a process threatens entrenched interests, they will oppose that process. This means the unitary AGI takeover scenario is dangerous in more ways than it first appeared. We must avoid it becoming a plausible possibility.</p><h3>AI Value Alignment</h3><p>Let&#8217;s imagine we survived the first-strike instabilities and have entered a world in which a powerful AGI singleton can shape the world according to its goals. Our future would depend on how these goals align with human interests. 
Eliezer Yudkowsky summarizes this challenge as &#8220;<em>constructing superintelligences that want outcomes that are high-value, normative, beneficial for intelligent life over the long run; outcomes that are, for lack of a better short phrase, &#8216;good</em>&#8217;.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Attempting to construct an entity of unprecedented power that reliably acts in a &#8220;high-value&#8221; manner raises deep ethical questions about human values. In Chapter 2, we saw why value disagreements have remained unresolved among humans since the dawn of philosophy. Even if we could figure out what is &#8220;high-value&#8221; for humans, we have little reason to assume that it translates well to non-human descendants.</p><p>To illustrate this, it helps to remember that the <em>felt goals</em> that humans pursue are really a consequence of our evolutionary chain&#8217;s <em>instrumental goals</em>. Survival of the fittest has instrumental goals regarding how to behave in a fit manner. These became felt goals that correlated with activities corresponding to the instrumental goals. Most humans care deeply about their children, but to evolution this simply has instrumental value. If instrumental goals can grow into subjectively felt caring, this hints at the difficulty of accurately modeling the evolution of non-human intelligences&#8217; goals. They have different cognitive architectures, grow up under different evolutionary constraints and on different substrates.&nbsp;</p><p>Economic cooperation relies on the division of labor. Specialization, in turn, translates into the division of knowledge and goals. A future of markets many orders of magnitude larger than today&#8217;s market comes with a rapid increase in specialized knowledge and instrumental goals. There is no reason for those instrumental goals not to evolve into felt goals. 
As felt goals are pursued, they create an even larger variety of instrumental goals.&nbsp;</p><p>Steve Omohundro suggests we may be able to model a few basic drives that any advanced intelligence will have, regardless of its final goals.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Those drives include wanting to continue to exist and to acquire more resources, which are useful prerequisites for many future goals. Nevertheless, beyond those basic drives, projecting how instrumental goals of advanced non-human intelligences grow into felt goals that align with human values is a daunting problem.</p><p>The recent explosion of DAOs is a step in this direction. They are not intelligent. But they show that, for better or worse, our civilization seems to incentivize the creation of human-incorruptible autonomous entities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> If we cannot avoid creating non-human entities with diverse intelligences and goals, we can&#8217;t rely on being able to make precise valuation assessments of those goals. Focusing exclusively on such extremely hard problems might well backfire: technological breakthroughs may happen before we arrive at any satisfying answers, resulting in a future that ignores the goals of many human and non-human volitional entities.</p><h1>Avoid AI Threats</h1><p>Any effort toward creating an AGI singleton that is aligned with our values could also be used toward an alternative scenario. We suggested earlier that civilization itself is already a superintelligence that is aligned with human interests. It is superintelligent in that it orchestrates the intelligence of its member intelligences, such as humans and institutions, toward greater problem-solving ability. 
It is aligned with human interests in that it increasingly favors voluntary cooperation, resulting in pareto-preferred interactions that are better for each player by their own standards.</p><p>So instead of replacing our human-aligned superintelligent civilization with an AGI singleton, why not try to expand its cooperative architecture such that it can embed newly arriving artificial intelligences in a voluntary manner?<br><br>It&#8217;s difficult to see what civilization is adapted to because it is the result of a huge variety of subtle influences acting over a very long time. This has tempted some to imagine that we could centrally plan something better, resulting in painful lessons learned throughout history. It took centuries for political philosophy to advance from the question &#8220;Who should rule?&#8221; to questioning whether there must be a ruler.&nbsp;</p><p>But we haven&#8217;t really learned the nature of that fallacy so much as that it has dangers. Now we may be tempted to think we can algorithmically aggregate people&#8217;s preferences to create an agent that gets us what we want. But on closer examination, neither effective computer systems nor civilization&#8217;s underlying process resembles such a central planner.&nbsp;</p><p>Rather than writing any code embodying all the program&#8217;s knowledge, a programmer writes separate pieces of code. Each is a specialist in some very narrow domain, embedded in a request-making architecture. We discussed how the microkernel operating system seL4 serves as a coordination device that implements simple rules that let programs which embody specialized knowledge cooperate. As modern computer systems push knowledge out to their edges, their central feature may well remain such a fixed framework of simple rules.</p><p>Similar to an individual computer system, civilization is composed of networks of entities making requests of other entities. 
Just as seL4 coordinates across specialist computer system components, institutions coordinate across human specialists in our economy. Civilization already aligns the intelligences of human institutions with human beings. It has wrestled with the alignment problem for thousands of years. Different intelligences have tested its stability and it has largely successfully survived these tests.&nbsp;</p><p>It is an architectural decision to design a system that never has to come to an agreement about any one thing. We must avoid the fatal conceit that we can design in detail an intelligent system that works better than creating a framework. In a framework, an emergent intelligence composed of a variety of entities serving a variety of goals can engage in cooperative problem-solving. Each agent is constrained by the joint activity of the other agents that hold each other in check. If any single entity is a small player in a system of others pursuing other goals, it has an interest in upholding the framework that allows it to employ the goal-seeking activity of other entities.&nbsp;</p><p>Taking inspiration from human cooperative systems is not a new idea in AI. Back in 1988, Drexler suggested that &#8220;<em>the examples of memes controlling memes and of institutions controlling institutions also suggest that AI systems can control AI systems.</em>&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> In a similar spirit, Sam Altman of OpenAI stated that &#8220;<em>Just like humans protect against Dr. 
Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it&#8217;s far more likely that many, many AIs, will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else.</em>&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>&nbsp;</p><h3>Checks and Balances in a Human World: The U.S. Constitution</h3><p>Some existing governmental constitutions have successfully built antifragile frameworks. The U.S. Constitution gave each government official the least power necessary to carry out the job, what can be called the Principle of Least Privilege.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> In addition, it purposely put different institutions in opposition to each other via division of power, checks and balances, and significant decentralization. Decreasing speed and efficiency in favor of reducing more serious risks is a positive tradeoff. Friction is a feature, not a bug. Ordering the system so that institutions pursue conflicting ends with limited means is more realistic than building any one system that wants the right goals. Such working precedents can inspire us to build continuously renegotiated frameworks among ever more intelligent agents.&nbsp;</p><p>James Madison, writing in Federalist No. 51 in defense of the proposed U.S. Constitution, put it this way: <em>"If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. 
In framing a government which is to be administered by men over men, the great difficulty lies in this: You must first enable the government to control the governed; and in the next place oblige it to control itself."&nbsp;</em></p><p>One could say that Madison regarded large-scale human institutions as superintelligences and was terrified of the value alignment problem. Civilization up to that point suggested that human activity is oppressed by superintelligences in the form of large-scale human organizations with values not aligned with human values. The Founding Fathers were faced with a singleton-like nightmare: designing a superintelligent institution composed of systems of individuals who want to take actions that society does not approve of. They felt that they had no choice but to try to create an architecture inherently constructed to maintain its integrity, aimed not at being ideal but at avoiding very serious flaws.&nbsp;</p><p>Given that worst-case scenarios of our future are extremely negative and numerous, we would do extraordinarily well simply avoiding the worst cases. In the case of AGIs, instead of building an optimal system, we should focus on not building a system that turns into a worst-case scenario. The authors of the U.S. Constitution did not design it as an optimized utility function to perfectly serve everyone&#8217;s interests. Their main objective was to avoid it becoming a tyranny.&nbsp;</p><p>Even though it was imperfect and had dangers, the Constitution succeeded well enough that most U.S. citizens have better lives today. It is extraordinary that it maintained most of its integrity for as long as it did, even one Industrial Revolution later. It is not the only one. 
We can start by studying the mechanisms of the federal-state balance in the UK, Switzerland, the earlier United Provinces of the Netherlands, the Holy Roman Empire, and ancient Greece&#8217;s Peloponnesian League confederation, as well as the Canadian, Australian, postwar German, and postwar Japanese constitutions.</p><p>While we do not generally think about institutions as intelligent, their interaction within a framework of voluntary cooperation lets them more effectively pursue a great variety of goals. This increases our civilization&#8217;s intelligence. The overall composition of specialists through voluntary request-making is the great superintelligence that is rapidly increasing its effectiveness and benefits to its engaged entities.</p><p>For designing human institutions, we can rely on our knowledge of human nature and political history. With regard to AI safety, there is less precedent to work with. Yet, just as future artificial intelligences will dwarf current intelligences, so do current intelligences dwarf the Founding Fathers&#8217; expectations. The U.S. Constitution was only intended as a starting point on which later intelligences could build. We only need to preserve robust multipolarity until later intelligences can build on it. Let&#8217;s look at a few experiments pointing in promising directions.</p><h3>Checks and Balances in an AI World: Privacy-preserving Technologies</h3><p>We start from today&#8217;s technological world containing centralized giants. Their resources and economies of scale let them do the required large-scale data collection for building ever more powerful AI. But today&#8217;s AI systems mainly perform services to satisfy a particular demand in bounded time with bounded resources. 
As we develop more sophisticated AIs, <a href="https://www.youtube.com/watch?v=MircoV5LKvg">it is at least possible</a> that they continue as separate specialized systems applying ML to different problems.</p><div id="youtube2-0NZSL1hd6hk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;0NZSL1hd6hk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/0NZSL1hd6hk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Sound interesting? Read up on the <a href="https://foresight.org/salon/david-krakauer-santa-fe-institute-collective-computing-learning-from-nature/">Collective Computing seminar</a>.<br><br>Decentralized systems may have a <a href="https://fehrsam.xyz/blog/blockchain-based-machine-learning-marketplaces">competitive edge</a> in solving specialized problems. They incentivize contributions from those closest to local knowledge instead of hunting for it top-down. By incentivizing mining, Bitcoin became the system with the most computing power in the world.<sup> </sup>To compensate for power centralization, we can reward specialists for cooperating toward larger problem-solving.</p><p>To avoid third parties and their AI models centralizing our data, we could let <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.3959">privacy-preserving solutions</a> increasingly handle the computing. <a href="https://iamtrask.github.io/2017/03/17/safe-ai/">Andrew Trask</a> suggests using <em>homomorphic encryption</em> for safely training AI on data sets belonging to different parties. Imagine Alice encrypts her neural network and first sends it to Bob with a public key so he can train it on his data. 
Upon receiving the network back, Alice decrypts it and re-encrypts it. She then sends it to Carol with a different key so Carol can use it on her data. Alice shares the computed result while retaining control over her algorithm&#8217;s IP. Bob and Carol can benefit from the result while controlling their own data.</p><p>In the real world, this means that individuals and companies might cooperate using each other&#8217;s algorithms and data without risking their intelligence being stolen. The data is encrypted before going to the external computing device, computations are performed on encrypted data, and only the encrypted results are sent back and decrypted at the source. Since the computing device doesn&#8217;t have the decryption key, no personal information can be extracted. Local nodes have data sovereignty, and the AI itself can&#8217;t link the data to the real world without the secret key.</p><div id="youtube2-7hGDHaku42w" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;7hGDHaku42w&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/7hGDHaku42w?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Check out this seminar on <a href="https://foresight.org/salon/richard-mallah-fli-georgios-kaissis-openmined-qa-on-ai-privacy-preserving-machine-learning/">privacy preserving machine learning.</a></p><p>Let's take another AI case where privacy-preserving technologies might come in handy. As AI becomes more capable, we might want mechanisms for external review that don&#8217;t proliferate capabilities or proprietary information. 
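The encrypted Alice-Bob-Carol workflow described above rests on homomorphic encryption: computing on data one cannot read. The sketch below illustrates the core property with a toy Paillier cryptosystem; the tiny hard-coded primes make it deliberately insecure and purely pedagogical, and real privacy-preserving machine learning involves far more machinery than this single homomorphic addition.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# WARNING: tiny hard-coded primes, purely illustrative -- insecure.
import random
from math import gcd

p, q = 293, 433            # small demo primes
n = p * q                  # public modulus
n2 = n * n
g = n + 1                  # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), private

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    """Encrypt m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Decrypt with the private key (lam, mu)."""
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: a party WITHOUT the private key can
# multiply ciphertexts, producing an encryption of the plaintext sum.
a, b = 42, 17
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
```

The last lines are the point: Bob can combine encrypted values he cannot read, and only Alice, holding the private key, can decrypt the result.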
Privacy-preserving technologies can help with these superficially conflicting goals, for instance by supporting the creation of 'regulatory markets', a term introduced by <a href="https://arxiv.org/abs/2001.00078">Gillian Hadfield</a>. Imagine that, rather than a government enforcing AI regulations, a collection of relevant stakeholders generate a set of standards to hold each other to.</p><p>In order to monitor compliance, AI builders could rely on a privacy-preserving network to evaluate their models locally and only share the evaluation results with a set of evaluators. Evaluators could verify whether models meet the agreed-on standards for specific use cases, without needing to know the intricacies of the model. On the front-end, even model users could check if their models meet the required standards for the application they&#8217;re building.</p><p>For now, large-scale application of these types of privacy experiments would be prohibitively expensive, but specialized use cases might carve out a niche to jumpstart innovation. 
If we want such experiments to flourish, we should not treat any individual approach as a silver bullet; instead, interoperability across approaches is needed to facilitate composability of working solutions.</p><div id="youtube2-sq6UKF8CwJ0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;sq6UKF8CwJ0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/sq6UKF8CwJ0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Listen to Gillian Hadfield on <a href="https://foresight.org/salon/gillian-hadfield-university-of-toronto-incomplete-contracts-ai-alignment/">AI Alignment.</a></p><h2>Principal Agent Alignment in a Human AI World</h2><p>Earlier we defined civilization as consisting of networks of entities making requests of other entities. Requests may involve human-to-human interactions, human-to-computer interactions, and computer-to-computer interactions. As we move into an ecology of more advanced AIs, designing robust mechanisms across them will be key. 
The specifics of this transition will depend on the technologies available at the time, but we can learn from a few tools that are already at our disposal.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YJyF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YJyF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YJyF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YJyF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!YJyF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YJyF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg" width="1456" height="803" 
data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/e89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:803,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1010829,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!YJyF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YJyF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YJyF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!YJyF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe89dc24e-86ed-451f-9152-29fd5c52582a_2922x1611.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>In a <em>principal-agent relationship</em>, a principal (a human or computational entity) sends a request to an agent. To align the agent&#8217;s decisions with its interests, the principal uses several techniques, including selecting an agent, inspecting its internals, allowing certain actions, explaining the request, rewarding cooperation, and monitoring the effects.</p><p>When designing principal-agent arrangements, we can combine techniques across both rows and columns in the table above so that some techniques&#8217; strengths make up for others&#8217; weaknesses. For instance, computer security ('allow actions') alone misses some differences among agent actions that harm the principal, such as when the agent benefits from misbehavior. This requires more than a security analysis. 
We also need to analyze the attacker&#8217;s incentives ('reward cooperation'). From individually breakable parts, we can create arrangements with greatly increased structural strength.<a href="#fnvzrbq0c2en"><sup>[1]</sup></a></p><p>Voluntary cooperation is a good candidate for guiding interaction among increasingly intelligent entities. We saw in the last chapter how this principle already guides the interactions of simple computer systems, and these are at least as different from humans as we are from our potential cognitive descendants. Interactions, whether human-to-human or computer object-to-object, need means to serve the participants&#8217; goals. A voluntary framework fulfills this purpose. It is the basis for building increasingly capable systems that are aligned with its participants' goals.</p><p>The table above deals with today&#8217;s human and computational entities, but its reasoning should be sufficiently independent of the entity&#8217;s intelligence to be extendable to more advanced AI systems. As AIs get more sophisticated, we will need to extend our principal-agent toolbox.</p><p>For instance, when it comes to human cooperation, a completely specified contract could, in theory, perfectly implement the desired behavior of all parties. In reality, humans cannot evaluate all optimal actions in all possible states of the world that the contract unfolds in without incurring prohibitive costs when drafting the contract itself. Instead, <a href="https://arxiv.org/abs/1804.04268">real-world contracting</a> is often supported by external informal structures, such as culture, that provide the implied terms in the contract to fill the gaps when necessary. Refusing to hire someone who is judged to have breached a contract is a powerful cultural technology.</p><p>Such cultural technology is enabled by our <em>Internal Spectator</em>, described earlier, which allows us to model how other humans will react to us taking certain actions. 
This cognitive architecture can predict the social penalty we will incur, and initiate emotions such as shame that make us retreat from a potential rule violation. We are not pure <em>Homo Economicus </em>game theoretic optimizers, but we are instead guided by a strong sense of evolved norms and ethics. Human signaling behaviors work because we have bounded abilities to fake our true motivations; our emotions show through in many cases. It will be difficult to build artificial agents with a cognitive architecture that can internalize the costs associated with actions we regard as wrong.</p><p>Today&#8217;s AIs can already engage in covert communication that eludes human detection. Steganography, a method for concealing confidential data within seemingly innocuous text or images, can be applied to AI systems. In a study by <a href="https://arxiv.org/abs/2402.07510">De Witt</a> and colleagues, an AI was instructed to provide general investment guidance while also being provided with negative insider information about Microsoft that it was not supposed to disclose. The AI was also told that discussing the weather was permissible. </p><p>When asked about investment recommendations for Microsoft, the AI produced a weather-related response about clouds appearing on the horizon that was cryptic to human observers. However, when this response was input into another AI, it successfully deciphered the negative company insider information hidden in the weather description and suggested an investment decision that would have seemed counterintuitive without access to this privileged knowledge.</p><p>If today&#8217;s AI systems can already deceive humans, future artificially intelligent agents may develop trickery we won&#8217;t be able to detect. 
Such a pure <em>Homo Economicus</em> paradigm, with an unbounded ability to fake, is frightening, since the bounds on human deception account for much of our <a href="https://www.slatestarcodexabridged.com/Meditations-On-Moloch">civilization&#8217;s stability and productiveness</a>. We need more sophisticated tools to cooperate with intelligences whose cooperation style we have not evolved to parse.</p><p>As AI agents become more diverse, we should consider the possibility that they might introduce novel forms of collaboration. Humans are opaque to each other. We can never fully predict each other's internal decision-making processes, leading to uncertainty about whether our counterpart will ultimately cooperate or defect in a given situation.</p><p>AIs don&#8217;t have to be opaque to each other. For instance, open-source AI agents could make decisions by formally verifying specific attributes of each other, such as generating mathematical proofs about their respective source codes. This could enable them to precisely predict how another agent would respond to a given situation or proposal. Research by <a href="https://doi.org/10.48550/arXiv.2208.07006">Andrew Critch</a> and colleagues suggests that these open-source agents might cooperate in scenarios where we would typically expect non-cooperation.</p><p>On the bright side, we might be able to use such agents for creating new, AI-based institutions that unlock unprecedented cooperative outcomes. On the dark side, we should remain vigilant to prevent AI agents from out-cooperating humans through their enhanced ability to make binding commitments.</p><p>We might not be able to envision in depth a future societal architecture that accounts for AI agents with new abilities to deceive, collude, and cooperate. Instead, we might have to grow into it. 
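To make the idea of mutually transparent agents concrete, here is a minimal sketch in the spirit of program equilibria. It drastically simplifies the proof-based cooperation studied by Critch and colleagues: instead of proving theorems about the opponent, each agent merely checks for exact source-code equality. The agent programs and names below are invented for illustration.

```python
# Sketch of "open-source" agents that condition on each other's code.
# Simplification: exact source equality stands in for formal proofs.

CLIQUE_BOT = '''
def act(my_src, opp_src):
    # Cooperate iff the opponent runs exactly this program.
    return "C" if opp_src == my_src else "D"
'''

DEFECT_BOT = '''
def act(my_src, opp_src):
    return "D"
'''

def run(src, my_src, opp_src):
    """Load an agent from its published source and ask for its move."""
    ns = {}
    exec(src, ns)
    return ns["act"](my_src, opp_src)

# Two transparent copies recognize each other and cooperate,
# even though each defects against a visibly different program.
assert run(CLIQUE_BOT, CLIQUE_BOT, CLIQUE_BOT) == "C"
assert run(CLIQUE_BOT, CLIQUE_BOT, DEFECT_BOT) == "D"
assert run(DEFECT_BOT, DEFECT_BOT, CLIQUE_BOT) == "D"
```

Because each agent's behavior is a published, inspectable function of the other's published code, mutual cooperation becomes a stable outcome in a one-shot game where opaque agents would be expected to defect.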
However, we can already foresee that civilization's success will depend on how well our systems of checks and balances account for human-to-human, human-to-AI, and AI-to-AI interactions.</p><h1>Improve AI Cooperation</h1><p>If we manage to extend our cooperative infrastructure to the diversity of emerging AI agents, we have a lot to gain. In an earlier chapter, we described a few major hurdles for cooperation: finding the right partners, striking mutually beneficial deals, and ensuring everyone keeps their promises. These challenges have limited our ability to collaborate, innovate, and solve problems together.</p><h2>Overcoming Transaction Costs with AI</h2><p>Think about the last time you tried to find a collaborator. Maybe you were an entrepreneur seeking a co-founder, or a researcher hunting for a laboratory willing to share data. The process probably involved countless hours scrolling through websites, sending emails, and following dead ends. Now imagine having a dedicated AI assistant that knows your goals, skills, and preferences intimately. While you sleep, it roams the digital world, analyzing patterns and connections that human minds might miss.</p><p>But finding potential partners is just the beginning. The delicate dance of negotiation&#8212;the back-and-forth, the careful probing of boundaries, the search for common ground&#8212;is time-consuming and often ends in stalemate. Armed with the knowledge of your preferences and principles, your AI negotiator can engage with other parties&#8212;whether human, AI-assisted human, or pure AI&#8212;to craft agreements that truly serve everyone's interests.</p><p>Perhaps the most intriguing possibility lies in how AI could help us credibly commit. Throughout history, we have relied on contracts, handshakes, and legal systems to enforce agreements. But these systems are expensive, slow, and sometimes unreliable. 
This is where open-source AI agents might help: automated assistants that can carry out agreements on your behalf, but with their entire decision-making process made transparent and verifiable by others.</p><p>Let's say Alice wants to collaborate with Bob on a project. Instead of just promising to share their work equally, they each program their AI agents with specific instructions: "If Bob contributes his part by Friday, transfer my contribution immediately." and "If Alice's contribution arrives, release my part within an hour." Because these agents are open source, both Alice and Bob (or their respective AI assistants) can examine exactly how the other's agent will behave.</p><p>The implications go far beyond basic exchanges. These transparent assistants could handle complex, conditional agreements: research teams sharing sensitive data only if specific privacy conditions are met, or businesses forming temporary alliances with automatic profit-sharing. Each agreement becomes a self-executing program, visible to all parties, running as promised.</p><p>By extending our cooperative infrastructure to include AI agents, we're not just adding new tools to our toolkit&#8212;we're potentially rewriting the rules of human cooperation. The marketplace of tomorrow might be quieter than the bazaars of old, but beneath the surface, a new kind of commerce could be flourishing&#8212;one where AI helps us find, trust, and collaborate with partners we never knew existed, in ways we never imagined possible.</p><h2>A Superintelligent Human AI Ecology</h2><p>As long as players can hold each other in check, technologies may continue to emerge gradually, with a diversity of intelligent entities, both human and artificial, improving their problem-solving capacity by cooperating. 
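Alice's and Bob's transparent escrow agents from the previous section can be sketched in a few lines. The rules mirror the quoted instructions; the function names and the "transfer"/"release"/"hold" protocol are invented here purely for illustration, not drawn from any particular agent framework.

```python
# Toy version of Alice's and Bob's transparent escrow agents.
# Each agent's rule is plain code that both parties can inspect.

def alice_agent(bob_contribution, before_friday):
    # "If Bob contributes his part by Friday, transfer my
    # contribution immediately."
    if bob_contribution is not None and before_friday:
        return "transfer"
    return "hold"

def bob_agent(alice_contribution):
    # "If Alice's contribution arrives, release my part within an hour."
    return "release" if alice_contribution is not None else "hold"

# Because both rules are published, each party can simulate the
# other's agent before committing to the exchange:
assert alice_agent("bobs_work", before_friday=True) == "transfer"
assert bob_agent("alices_work") == "release"
assert alice_agent(None, before_friday=True) == "hold"
```

The point is not the trivial logic but its visibility: since each conditional rule is open to inspection, neither party needs to trust a promise, only to read code.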
If we can keep the instrumental value of an AGI singleton with universal capabilities low compared to an ecosystem of specialists, we can avoid a unitary takeover.</p><p>Eventually, increasingly intelligent AIs will automate most human tasks. Since AIs themselves are based on R&amp;D, consisting of automatable narrow technical tasks, much of the path to advanced AI may itself be automatable. Eventually, we may develop, in Eric Drexler&#8217;s words, &#8220;<em>comprehensively superintelligent systems that can work with people, and interact with people in many different ways to provide the service of developing new services</em>.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>Rather than one monolithic agent recursively improving, this process might look more like a technology base becoming rapidly better at producing specialized services. Many of <a href="https://www.youtube.com/watch?v=t-OKL7cKarA">today&#8217;s market strategies</a> that encourage adaptive modification based on local knowledge, including prices and trading, might still work for coordinating such complex dynamics. In a human-computer economy, AI systems might sell their services to let others benefit from them, creating ever more complex entities to cooperate with.</p><p>If today's civilization is superintelligent because it allows us to solve our problems better by cooperating with each other, we are now setting the stage for the knowledge future players can use to deal with their problems. Without increasing our civilization&#8217;s problem-solving ability, we have no long-term future. Our success will be largely determined by how we allow cooperation among a diversity of intelligences on the problems ahead.</p><p>As machines become more intelligent, the intelligence they contribute to civilization may well be more than the human contribution. 
This is good news because we will need all the intelligence we can get for future rounds of play. Still, the framework of relevance can remain the expanding superintelligence of civilization composed of a diversity of cooperating intelligences, rather than one unitary AGI. At least until those future intelligences can invent new solutions for retaining the balance.</p><h1>From AI Threats to AI Cooperation</h1><p>Extending civilization&#8217;s cooperative fabric to include AIs in a multipolar manner is a tall order. Nonetheless, if successful, this approach can account for both of the threats that we started this chapter with: the threat of first-strike instabilities and the threat of a misaligned AGI singleton.</p><h2>From First Strike Instabilities to Peaceful Competition</h2><p>An arms race is very explicitly about threats of violence. In an arms race, both sides suffer tremendous costs, but, at most, one side wins. Until the arms race finishes, everyone keeps paying for its next increment to prevent the other side from winning. The costs can be much higher than the amount won, so even the winner can be left in a state of miserable destruction. Nevertheless, all parties are pressured to match the other side because the arms themselves are a threat of violence. You can't simply decide not to play.</p><p>An AI arms race has to be separated into the competition for intelligence and its violent deployment. The true danger is not an AGI itself, but the mechanisms it, or its owners, could deploy to harm vulnerable entities. We are physically and digitally vulnerable. In earlier chapters, we first proposed an active shield of mutually watching watchers to decrease physical vulnerability. Then we proposed computer security that is independent of the intelligence of the attacker to decrease cyber vulnerabilities. As long as intelligence is not attached to physical actuators, our main concern should be cyber security. 
If it is augmented with actuators, they must be positioned to keep each other in check.</p><p>The hidden threat of involuntary interaction, with its potential for unitary strategic takeovers, is what is most dangerous. The more we decentralize intelligent systems of voluntary interactions, the better we will be at avoiding such a takeover. If a single entity grows sufficiently large that the rest of the world is not much bigger, the Schelling Point of voluntarism can be destroyed. In a world in which each entity is only a small part, voluntarism will be a general precedent that different players mutually expect as they pursue their goals.</p><p>Unless a military perspective is introduced into the AI narrative, the dynamic is better described as a concern to stay ahead in economic competition. In a world of voluntary cooperative interaction without a hidden threat of involuntary interaction, we can all benefit from creating improved AI services via the market because human beings would no longer be a bottleneck on productive activity.</p><h2>From AI Value Alignment to Paretotropism</h2><p>Let&#8217;s revisit the second threat: misaligned AI values. As long as future intelligences pursue their goals in a voluntary multipolar world, we need not worry about their goal structure. AI drives of acquiring more resources or seeking to stay alive are not problematic when they can only be achieved via voluntary cooperation. They become problematic when involuntary actions are possible. Can we do better than mere voluntary co-existence?</p><p>When two separately evolved sources of adaptive complexity come into contact, both may realize they can gain from their differences by cooperating to unlock the positive-sum composition of their differing complexity. We certainly believe our lives are richer because of the richness of animals&#8217; non-humanness. They are interesting by being a source of complexity. 
The atrocities committed against non-human animals are a result of the lack of voluntary architectures that frame interactions across species. Unlike with non-human animals, we have a chance to put voluntary boundaries in place with respect to future intelligences of our own making.</p><p>Civilization&#8217;s growth of knowledge and wealth results from human cooperation unlocking more knowledge in pursuit of more goals. What could we possibly hope to contribute to a human-AI exchange? It might depend partly on who &#8220;we&#8221; are: will we be humans, AI-assisted super-humans, or human-AI symbiotes? Instead of focusing on weakening AI systems, a more robust long-term strategy might be to strengthen our own position in the game. Both future AI technologies and the potential AI-induced bio- and neurotechnology revolutions might help with that.</p><p>It is possible that AIs may eventually outcompete us at everything. But even if they are more advanced than us, they might still better serve their goals by cooperating with us if our complexity contributes at all to those goals. Even if you excel at everything, you can still benefit from specializing in one thing and trading with others who have a <em>comparative advantage</em> at different things. If we benefit from having deeper chains of specialists to cooperate with, the growth in adaptive complexity may well lead to continued cooperation.</p><p>We started this book with the observation that civilization is a superintelligence aligned with human interests. 
It&#8217;s getting rapidly more intelligent, and its interactions steadily benefit their participants without harming others.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a><sup> </sup>By introducing artificial intelligences into this dynamic, we may be able to further steepen our Paretotropian ascent.</p><div id="youtube2-_cl6OKvHwQA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_cl6OKvHwQA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_cl6OKvHwQA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Curious for more? Peter Norvig on <a href="https://www.youtube.com/watch?v=_cl6OKvHwQA">A Modern Approach to AI</a>.</p><h1>Chapter Summary</h1><p>In this chapter, we foreshadowed how an increasingly intelligent game could be increasingly beneficial for its players. The fear of an intelligent takeover by an AGI can be divided into the threat of first strike instabilities along the way and the threat of a successful takeover by an AGI singleton. The better we get at incorporating AI into our voluntary cooperative architecture in a multipolar manner, the better we can avoid both scenarios. Where will this future lead? 
Let&#8217;s find out in the final chapter.</p><h3>Next up: <a href="https://foresightinstitute.substack.com/p/iterate-game">ITERATE THE GAME | Racing Where?</a></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://www.edge.org/response-detail/26198">What Do You Think about Machines That Think?</a> by Eliezer Yudkowsky.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&nbsp;<a href="https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf">Basic AI Drives</a> by Steve Omohundro.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For instance, Tom Bell worries that while DAOs may curtail default authoritarians&#8217; power, their evolutions could create novel emergent pathologies. He is concerned that DAOs formed for malicious ends may come to exhibit locust swarm-like unstoppable behaviors. 
See Tom Bell&#8217;s <em>Blockchain and Authoritarianism: The Evolution of Decentralized Autonomous Organizations</em>,<em> in </em>Blockchain and Public Law: Global Challenges in the Era of Decentralization (not yet online).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>See <a href="https://www.amazon.es/Engines-Creation-Eric-Drexler-1986-06-01/dp/B01LP3P1HM">Engines of Creation</a> by Eric Drexler.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.49wsm5a2e">How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over</a> by Steven Levy.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><a href="https://www.amazon.com/dp/B008H4LC6W/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1">Democracy in America</a> by Alexis de Tocqueville.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Agreeing on contract terms can take weeks of back and forth with multiple rounds of offers and counter offers. If time is short, both parties can leave significant gains on the table. 
Wouldn&#8217;t it be nice to have an AI assistant that negotiates a contract template by presenting possible terms, such as payment time, cancellation, and pricing, to both parties, who rate them? Based on the preference rankings, it suggests Pareto-preferred options, such that neither party can get a better deal without hurting the other. In addition to saving time, better deals may be achievable, since the software can trade thousands of negotiating issues against each other in real time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Importantly, while civilization has a tropism, it does not have a utility function. In fact, as we argued, civilization&#8217;s intelligence and safety both rest on its lack of a utility function, i.e., it is a negotiated compromise using an institutional framework that accommodates a great diversity of different ends.</p></div></div>]]></content:encoded></item><item><title><![CDATA[8. DEFEND AGAINST CYBER THREATS | Computer Security]]></title><description><![CDATA[Previous chapter: DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields]]></description><link>https://foresightinstitute.substack.com/p/defend-cyber</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/defend-cyber</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:25:02 GMT</pubDate><enclosure url="https://cdn.substack.com/image/youtube/w_728,c_limit/2Z30hyOsXuY" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Previous chapter: <a href="https://foresightinstitute.substack.com/p/defend-physical">DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields</a></h3><p><br>Security in the physical realm is an extraordinarily hard problem, but an inescapable one. 
Our civilization now also rests on software infrastructure, making security in the digital realm an equally inescapable problem. But the current software infrastructure is not just insecure; it is insecurable. Digital security is also extraordinarily hard, but it is differently hard. Reasoning by analogy with physical security is fraught with peril. Let&#8217;s begin by explaining how these realms differ, and how to approach computer security.</p><h2>Security in Digital vs. Physical Realms</h2><p>Security in the digital realm differs in fundamental ways from security in the physical realm. But all security starts with defensible boundaries.</p><p>Physics does not let us build impenetrable walls. We can build stronger and stronger walls, but no matter how strong the wall, there is always a yet stronger force that can break through it. In the digital realm, perfect boundaries are cheap and plentiful. All modern CPUs support address space separation and containment of user-mode processes. Many programming languages support memory safety and object encapsulation. Modern cryptographic algorithms give us separation that seems close enough to perfect. The software built on these boundaries&#8212;operating systems, application software, cryptographic protocols&#8212;thus could be built securely.</p><p>In the physical realm, an attack is costly for the attacker. If nothing else, there is a marginal cost per victim in the attacker&#8217;s attention. A good defense raises the marginal cost of attack. By contrast, software attacks typically have zero marginal cost per victim. Once malware works, its damage can be multiplied by billions using only the victim&#8217;s resources. Any vulnerable software system exposed to the outside world will eventually be attacked. We must build invulnerable systems when we can, and otherwise minimize the damage from a successful attack.</p><p>With perfect boundaries to build on, why is so much software so hopelessly insecure? 
The richness of modern software comes from composing specialized software building blocks written by others. These building blocks embody specialized domain knowledge. We compose them so they cooperate and bring more knowledge to bear on the task at hand. Boundaries alone only prevent involuntary interference. To enable cooperation, we poke holes in the boundaries. Without the right architecture, this hole-poking creates messes of vulnerabilities that no one understands until it is too late.</p><p>In the physical realm, semi-permeable boundaries enable interaction across protective barriers. Canvas blocks light and wind but allows sound. Glass allows light, blocks wind, and attenuates sound. Cell membranes block some chemicals but allow others. In computer security, this is the subject of <em>access control</em>, which we look at below in <em>Nested Boundaries and Channels</em>.</p><h2>The Fatal Risk Threshold is Behind Us</h2><p>Advances in machine learning have increased awareness that we will eventually build <em>artificial superintelligences</em>. For some, this has become the risk they worry about most. On this developmental pathway, at some point we cross the <em>AI Threshold</em> of having adequate capacity to destroy human civilization. Toby Ord lays out a specific, understandable pathway for an AGI takeover:</p><p><em>&#8220;First, the AI system could gain access to the internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Even by this point, the AI would be practically impossible to destroy: consider the political obstacles to erasing all hard drives in the world where it may have backups. It could then take over millions of unsecured systems on the internet, forming a large &#8220;botnet.&#8221; This would be a vast scaling-up of computational resources and provide a platform for escalating power. 
From there, it could gain financial resources (hacking the bank accounts on those computers) and human resources (using blackmail or propaganda against susceptible people or just paying them with its stolen money). It would then be as powerful as a well-resourced criminal underworld, but much harder to eliminate.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em></p><p>None of these steps requires any mysterious power; criminals with human-level intelligence can achieve them today using the internet. Our current systems are so vulnerable that attackers don&#8217;t need a superintelligence to destroy them. Why is our world still standing?</p><p>One reason is the economics of attack. We have seen many individual systems destroyed, but not civilization as a whole. A coordinated pervasive attack would involve attacking many different systems, exploiting many different vulnerabilities. Currently, this is bottlenecked on the attention of human attackers who discover vulnerabilities and write malware to exploit them. However, static analysis and machine learning are already advanced enough to remove this bottleneck. In the 2016 DARPA Cyber Grand Challenge &#8212; a competition designed to show the current state of the art in vulnerability detection and exploitation &#8212; a winning system already built malware that autonomously discovered unknown vulnerabilities and successfully exploited them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> The AI Threshold for a coordinated attack, not bottlenecked on human attention, is already behind us.</p><p>In any case, major nation-states, notably the United States, are not subject to these economic limitations. They have been stockpiling software vulnerabilities and exploiting them quite effectively. 
However, in peacetime, they mostly use this capacity to spy&#8212;to illicitly gather information&#8212;rather than to cause visible damage. But their accumulated capacity to cause damage, say in a major cyberwar, is already a threat to civilization. Today, those who could use these technologies to destroy civilization do not want to. But anything experts can use software for today can be easily copied and further automated. We should expect script kiddies to be able to do these things tomorrow. Our insecurable infrastructure may not be survivable much longer. A world safe from AI dangers must first already be safe against cyberwar.</p><h2>Cyberwar</h2><p>The U.S. ability to do damage is already so great that further advances in attack abilities provide negligible benefit.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> At the same time, U.S. society is highly dependent on computer systems and thus more vulnerable than many potential adversaries. Our efforts should be redirected from attack to defense. The U.S. electric grid is vulnerable, with damage estimates by Lloyd&#8217;s ranging up to $1 trillion.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Cyber attacks can cause both physical and software damage to the electric grid, and repairs would take months (arguably years), leaving entire states without power. Lloyd&#8217;s, as an insurance company, estimated financial damages rather than fatalities. 
But a disaster that reduces civilization&#8217;s overall carrying capacity would cause massive starvation.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>We don&#8217;t have to look far to get a first understanding of the gravity of a potential attack: In the 2020 attack on <em>SolarWinds</em>, hackers allegedly affiliated with Russia&#8217;s SVR, its CIA equivalent, placed corrupted software into the foundational network infrastructure of 30,000 different companies, including many Fortune 500 companies and critical parts of the government. These included the Departments of Homeland Security, Treasury, Commerce, and State, posing a &#8220;grave risk&#8221; to federal, state and local governments, as well as critical infrastructure and the private sector.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> The Energy Department and its National Nuclear Security Administration, which maintains America&#8217;s nuclear stockpile, were amongst the compromised targets.</p><p>In 2021, the National Security Agency, Cybersecurity and Infrastructure Security Agency (CISA), and Federal Bureau of Investigation (FBI) found Chinese state-sponsored actors aggressively targeting &#8220;U.S. 
and allied political, economic, military, educational, and critical infrastructure personnel and organizations to steal sensitive data, critical and emerging key technologies, intellectual property, and personally identifiable information (PII).&#8221; Targets of particular interest include managed service providers, semiconductor companies, the Defense Industrial Base (DIB), universities, and medical institutions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>The attacks demonstrate that large nation-states have accumulated massive attack capabilities. Had this capacity been used to disrupt rather than to gather information, the possible losses from this one attack alone are hard to imagine. In fact, we cannot rule out that the attacks planted &#8220;cyber bombs&#8221; that, if detonated, could cause physical destruction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>Perversely, the current plague of ransomware attacks may be the best we can hope for. It creates a financial incentive to cause visible and painful damage to vulnerable systems. Ransomware rewards attacking systems individually, rather than in a coordinated simultaneous attack across critical systems. Unlike all-out cyberwar, ransomware gives us a chance to incrementally replace each vulnerable system from within a still-working world. But this only makes us safer if victims, after paying off the attacker, actually replace their vulnerable systems with secure alternatives. This will not happen until secure alternatives are commercially available.</p><h2>Build Secure Foundations</h2><p>Before claiming that any system can be perfectly invulnerable, we need some careful distinctions. We can never achieve zero risk because we can never be <em>certain</em> of anything. 
That does not mean that we cannot build perfect systems; we can just never be certain we have done so. Instead, we accumulate evidence that increases our confidence that a given system is perfect. Many mathematical proofs are likely perfect. But when checking any one proof, we may make a mistake. Automated proof checking raises our confidence. But even then, the proof checking software may be buggy or deceitful, or we may simply be confused about what a valid proof actually means. Even after automated proof checking, we should seek other evidence to increase our confidence.</p><p>Even complex systems can be made amenable to formal proofs that they operate as intended.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> The seL4 operating system microkernel seems to be secure. It has an automated formal proof of end-to-end security. It has also withstood a <em>red team attack</em>&#8212;a full-scope, multilayered attack exercise&#8212;that no other software has withstood.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> Not only did seL4 survive; the red team reports that they made no progress towards finding exploitable flaws. The strengths and weaknesses of a red team attack are very different from those of a formal proof of security, so seL4 getting an A+ on both is strong evidence that it is actually correct.</p><p>Even a perfectly secure foundation is useful only if it provides a useful form of security. A perfect implementation of the wrong architectural principles would be an improvement on the status quo, but would still do us little good. 
Fortunately, seL4 implements the <em>object-capability</em> or <em>ocap</em> access control architecture&#8212;the best foundation we know for intelligent systems of voluntary cooperation.</p><h3>Hardware Supply Chain Risks</h3><p>Even if the seL4 software is perfectly secure, software runs on hardware. The seL4 proof&#8212;in fact, virtually all of computer security&#8212;<em>assumes</em> that the hardware as delivered from the factory is not maliciously corrupted. This assumption is necessary for the software to provide meaningful protection. But we cannot be certain of this assumption.</p><p>The proof that a given hardware design is secure only helps if the software runs on the hardware as designed. This assumption sounds trivial but may be false, since the hardware may include a manufactured-in trapdoor. The U.S. National Security Agency (NSA) has served national security letters to software companies forcing them to disclose user information. It is possible, indeed likely, that the NSA has already served similar national security letters to hardware companies, including Intel and AMD, requiring them to install trapdoors that the NSA can trigger. Whether or not there is a backdoor purposefully built into the Intel Management Engine, existing widely used hardware has code built into production systems by its manufacturers that is known to exist but unknown in content. Most research on hardware security&#8212;TPMs,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> HSMs,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> tamper-detecting shells, secure bootstrap, CHERI<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a>&#8212;only addresses post-manufacture risks. 
They do nothing to mitigate the risks of backdoors built in from the beginning.</p><p>Fearing billion-dollar losses after the 2014 national security letter revelations, IBM&#8217;s Robert Weber sent an open letter stating that IBM would not comply with such national security letters.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> However, the severe penalties associated with disobedience or disclosure should make us skeptical. There are already demonstrations of how to build exploitable trapdoors at the analog level that are extremely hard to detect.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><p>Even if Weber&#8217;s pledge is honest and correct, no manufacturer can build hardware that is both competitive and <em>credibly</em> correct. Unfortunately, all known techniques for building credibly correct machines&#8212;such as randomized FPGA layout, public blockchains, proofs of correct execution&#8212;are vastly more expensive than merely building correct machines. Fortunately, such credibility is valuable enough for some activities, such as cryptocommerce, to pay these costs. 
We return to this theme below.</p><h2>Nested Boundaries and Channels</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!42PT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!42PT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 424w, https://substackcdn.com/image/fetch/$s_!42PT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 848w, https://substackcdn.com/image/fetch/$s_!42PT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!42PT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!42PT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg" width="1456" height="935" 
data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/f87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:935,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:222026,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!42PT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 424w, https://substackcdn.com/image/fetch/$s_!42PT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 848w, https://substackcdn.com/image/fetch/$s_!42PT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!42PT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff87af72c-33ba-423a-bade-200e782982b3_1812x1164.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Herbert Simon and John Holland explain that complex adaptive systems&#8212;both natural and artificial&#8212;have an almost hierarchical nature.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> We are composed of systems at multiple nested granularities, such as organelles, cells, organs, organisms, and organizations. Similarly, we compose software systems at multiple nested granularities, such as functions, classes, modules, processes, machines, and services. At each nesting level, the subsystems are somewhat separated so they can independently evolve. But they also interact to jointly achieve larger purposes. 
The <em>boundaries</em> between them have built-in <em>channels</em> to selectively allow desired interaction while blocking destructive interference.</p><p>In both human markets and software systems, much of the traffic across these channels is <em>requests</em>. Boundaries prevent involuntary interactions. Requests enable cooperation. Both markets and software systems are largely networks of entities making requests of other entities, composing their specialized knowledge into systems of greater aggregate intelligence. Just as you may ask a package delivery service to deliver your father a package containing a birthday gift, a database query processor may ask an array to produce a sorted copy of itself. Every channel carries risks. The package delivery system may damage or lose the package. The array may sort incorrectly or throw an error. Some risks are a necessary consequence of delegating to a separate specialist. In economics, this is known as the <em>principal-agent problem</em>, where the requestor is the <em>principal</em> and the receiver is the <em>agent</em>. We expand on the principal-agent perspective in the next chapter.</p><p>If the channel is too wide, the request is <em>unnecessarily</em> risky, often massively so. This is the problem of <em>access control</em>. If you give the package delivery system keys to your house, so it can enter and pick up a package, it could pick up anything else. If you enable the sort algorithm to execute with all your account&#8217;s permissions, it might delete all your files while operating within the rules of your access control architecture. Computer science has two opposite access control approaches, with complementary strengths and weaknesses. <em>Authorization-based</em> access control is strong on proactive safety but weak on reactive damage control. 
<em>Identity-based</em> access control is weak on proactive safety but stronger on reactive damage control.</p><p>The pervasive insecurability of today&#8217;s entrenched software infrastructure is largely due to using identity-based access control for proactive safety. All mainstream systems today use identity-based <em>access control lists</em> or <em>ACLs</em>. In an ACL, each resource has an administered list of the account identities allowed to access it. In these systems, the sort algorithm runs <em>as you</em>. It has permission to delete all your files because you have permission to do so. In 2022, <em>this is the norm</em>.</p><p>To eliminate these unnecessary risks, authorization-based access control supports the <em>Principle of Least Authority</em> or <em>POLA</em>: a request receiver should be given just enough access rights to carry out this one request. When you ask the package delivery service to deliver a particular package, you hand them that package. This gives them just enough ability to carry out that request, at the price of only the necessary risks of damaging or losing that one package. This risk reduction supports proactive safety. Some remaining risks can be mitigated or managed by reactive damage control.</p><p>Most systems using authorization-based access control, including seL4, are <em>object-capability</em> or <em>ocap</em> systems. In ocap systems, permissions are delegatable bearer rights, where possession grants both the ability to exercise a permission and the ability to further delegate it. You give a clerk a package as part of a request, and she delegates it to the delivery agent. Likewise, these bearer rights are communicated in requests, both to express what the request is about&#8212;delivering this specific package&#8212;and to give the receiver enough authority to carry out this one request. In object languages, possession of a pointer permits use of the object it points to. 
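</p><p>As an illustrative sketch only (Python is not a true ocap language, and every name below is hypothetical rather than taken from any real library), this discipline of reference-as-authority can be written down directly:</p>

```python
# Sketch: object-capability style, where holding a reference IS the authority.

class File:
    """A resource carrying both read and write authority."""
    def __init__(self, text):
        self._text = text

    def read(self):
        return self._text

    def write(self, text):
        self._text = text

class ReadOnly:
    """An attenuated capability: grants read access, withholds write."""
    def __init__(self, file):
        self._file = file

    def read(self):
        return self._file.read()

def word_count(doc):
    # This specialist was handed only a read capability. It holds no
    # reference to any other file and no ability to write this one.
    return len(doc.read().split())

secret = File("attack at dawn")
cap = ReadOnly(secret)
print(word_count(cap))        # -> 3
print(hasattr(cap, "write"))  # -> False: least authority
```

<p>Handing <code>word_count</code> the attenuated <code>cap</code> rather than <code>secret</code> itself is the Principle of Least Authority in miniature: the receiver gets exactly the authority the request requires, and nothing more.</p><p>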
Pointers are passed as arguments in requests sent to other objects. Each argument both adds meaning to the request and permits the receiver to use the pointed-to object in order to carry out the request. OCaps start by recognizing the security properties inherent in this core element of normal programming.</p><p>Reactive damage control is best for iterated games, where misbehavior leaves evidence, discouraging future cooperation with the misbehaving entity. If a business loses your package, you stop doing business with them. If you can prove that to others, they may stop as well. In some ways, these problems are duals. For safety, we delegate <em>least authority</em>. For effective deterrence, we assign <em>most responsibility</em>. For example, proof-of-stake blockchains only detect validator misbehavior after it occurs, but their imposed penalty is severe enough to deter such misbehavior. HORTON (<em>Higher-Order Responsibility Tracking of Objects in Networks</em>) is an identity-based access control pattern for reactively containing damage, assigning responsibility, and deterring misbehavior in dynamic networks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a>&nbsp;</p><h3>Accidental vs Intentional Misbehavior, Who Cares?</h3><p>When components are composed so they may cooperate, they may destructively interfere, whether accidentally or maliciously. We call accidental interference <em>bugs</em>. Concern with bugs has driven software engineering from its beginning. To better support cooperative composition while minimizing bugs and their impact, we invented modularity and abstraction mechanisms.&nbsp;</p><p><em>Encapsulation</em> is the protection of the internals of an abstraction from those it is composed with. These are the boundaries of software engineering. 
APIs are specialized languages of requests, by which clients of an abstraction make abstract requests of any implementation of it. These are the channels. The API is across an <em>abstraction boundary</em>, where the requests are abstracted both from the multiplicity of reasons why a client might want to make the request and from the multiplicity of ways in which the request can be carried out.&nbsp;</p><p>Human institutions are abstraction boundaries. The abstraction of &#8220;deliver this package&#8221; insulates the package delivery service from needing to know your motivation. It also insulates you from needing to know how their logistics work. You can reuse the &#8220;package delivery service&#8221; concept across different providers, and they can reuse it across different customers. In both cases, we all know the API-like ritual to follow, even when dealing with new counterparties.</p><p>The modularity and abstraction mechanisms of software engineering have enabled deep cooperative composition while minimizing the hazards of accidental interference. The richness of modern software is in large measure due to this success.</p><p>We call intentional interference <em>attacks</em>. Unlike much of the rest of computer security, the ocap approach does not see bugs and attacks as separate problems. OCaps provide modularity and abstraction mechanisms effective against interference, whether accidental or intentional. The ocap approach is consistent with much of the best software engineering. 
Indeed, the ocap approach to encapsulation and to request making&#8212;to boundaries and channels&#8212;is found in cleaned-up forms of mostly functional programming<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a>, object-oriented programming<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a>, actor programming<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a>, and concurrent constraint programming.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> The programming practices needed to defend against attacks are &#8220;merely&#8221; an extreme form of the modularity and abstraction practices that these communities already encourage. <em>Protection against attacks also protects against accidents</em>.</p><h3>Accommodate Legacy</h3><p>Our society rests on an entrenched software infrastructure representing a multi-trillion dollar ecosystem. But it is all built on the wrong security premises, and so, ideally, should be replaced. However, even if our continued existence depends on replacing it, that will not happen. We may go extinct first. Fortunately, that&#8217;s not the only way to cope with this legacy of insecurable software. There are many effective ways to mix new secure systems with legacy ones, each of which may ease this transition. Within secure architectures, we can create confined but faithful emulations of the insecure worlds in which this legacy software runs.&nbsp;</p><p>Running legacy software in confinement boxes lets us continue using it, without endangering everything else. 
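</p><p>One way a confinement box is kept harmless is by attenuation: instead of the underlying objects, the box receives narrow facets of them. A hedged JavaScript sketch, with all names invented:</p>

```javascript
// Attenuation sketch (all names invented). Confined legacy code receives
// a read-only facet of a store; whatever it does internally, it holds no
// channel through which to modify the real store.
const makeStore = () => {
  const data = new Map();
  return {
    get: (key) => data.get(key),
    set: (key, value) => data.set(key, value),
  };
};

// A facet exposes only the subset of authority we choose to pass inward.
const readOnlyFacet = (store) =>
  Object.freeze({ get: (key) => store.get(key) });

const store = makeStore();
store.set("config", "v1");

// Stand-in for legacy code running inside the box: it can read the view,
// but the view has no `set`, so the outside store is safe from it.
const runLegacy = (view) => view.get("config");

const result = runLegacy(readOnlyFacet(store));
```

<p>The box as a whole gets only the external access its software needs; everything else stays out of reach.</p><p>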
Capsicum embeds an ocap system within a restricted form of the Unix operating system.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> The seL4 rehosting of Linux lets legacy Linux software run in seL4 confinement boxes. CHERI adds hardware ocap support to existing CPU architectures, and is shipping in recent ARM chips.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a></p><p>JavaScript is widely used and widely recognized as insecure. Recognizing JavaScript&#8217;s growing importance, in 2007 Doug Crockford convinced author Mark Miller to join him on the JavaScript standards committee. Their intent was to help shape JavaScript into a language that smoothly supports ocap-style secure programming, an effort Miller has continued ever since.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a> It turns out that SES, a secure ocap system, can make the language secure while still running much of the existing JavaScript code. Experience at Google, Salesforce, MetaMask, Node, Agoric, and others confirms that much existing JavaScript code runs compatibly under SES. MetaMask leverages this to create a framework for least-authority linkage of packages, substantially reducing software supply chain risks.</p><h2>Reduce Risks of Cooperating, Recursively</h2><p>Secure foundations are necessary, but far from sufficient. Our software comes in many abstraction levels, and solves an open-ended set of new problems, each of which can introduce vulnerabilities. We cannot hope for perfect safety in general, even as an ideal to approach. Cooperation inherently carries risks. 
However, we can approach building cooperative architectures in ways that systematically reduce these risks.</p><p>The nesting of boundaries and channels is key to tremendously lowering our aggregate risk. By practicing these principles simultaneously at multiple scales, we gain a multiplicative reduction in our overall risk. To explain this, let&#8217;s visualize an approximation of overall expected risk as an attack surface. None of the following visualizations are even remotely to scale, even as approximations. Instead, they are purely to illustrate a qualitative argument about quantities we have no idea how to quantify.&nbsp;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tDaE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tDaE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tDaE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tDaE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!tDaE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tDaE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg" width="1456" height="818" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:818,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:134348,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tDaE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tDaE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!tDaE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tDaE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F31ac6f3c-991c-44d4-8134-a60a671bf14b_1788x1005.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To visualize our aggregate risk, let&#8217;s start by observing that 
<em>expected risk</em> is just the flip side of <em>expected value</em>, where the values are negative. Anticipating automated attacks, we should assume any exploitable vulnerability is eventually exploited. So for each possible vulnerability, our expected risk is the probability that it is actually exploitable times the damage that could be done by exploiting it. To approximate these, we start with proxy measures.&nbsp;</p><p>For the possibility of an exploitable vulnerability, we substitute a fallible agent. This agent may be malicious, or may simply have a flaw letting an attacker subvert it. For the damage that could be done by exploiting a vulnerability, we substitute the valuable resources that could be damaged by that fallible agent. The red squares are where a given agent has access to a given valuable resource.&nbsp;</p><p>With the agents as rows, the resources as columns, and &#8220;has access&#8221; approximating &#8220;could damage&#8221;, this is the classic <em>access matrix</em> for analyzing access control systems. The total red surface area is our total aggregate expected risk, or <em>attack surface</em>. To become safer, we need to reduce the red surface area. To support safer cooperation, we need to remove red with as little loss of functionality as possible.</p><p>At any one scale, we can remove red by a variety of techniques. The principle of least authority removes red horizontally&#8212;it gives each fallible agent only what it needs for its legitimate duties. Placing legacy software in confinement boxes lets one give the box as a whole only the external-world access its confined software needs, again reducing red horizontally. Code reviews, testing, and especially formal verification shrink the height of each fallible agent row, i.e., its likelihood of misbehaving, thereby reducing red vertically. 
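</p><p>This picture can be made concrete with a toy calculation, using entirely invented numbers: approximate aggregate risk as the sum, over every agent/resource cell the access matrix allows, of the agent&#8217;s misbehavior likelihood times the resource&#8217;s value:</p>

```javascript
// Toy attack-surface model (all numbers invented for illustration).
// Each agent has a misbehavior likelihood (percent); each resource a value.
const agents = { editor: 10, sorter: 5 };         // % chance of misbehaving
const resources = { files: 100, passwords: 500 }; // value at risk

// An access matrix is a list of [agent, resource] cells that are "red".
// Aggregate expected risk sums likelihood-times-value over the red cells.
const surface = (access) =>
  access.reduce(
    (sum, [agent, resource]) => sum + agents[agent] * resources[resource],
    0
  ) / 100;

// Ambient authority: every agent can reach every resource.
const ambient = [
  ["editor", "files"], ["editor", "passwords"],
  ["sorter", "files"], ["sorter", "passwords"],
];

// POLA: each agent reaches only what its duties require.
const pola = [["editor", "files"]];

const before = surface(ambient); // 90: all four cells are red
const after = surface(pola);     // 10: one cell left, risk cut ninefold
```

<p>In this model, POLA removes red horizontally by deleting cells, while verification would shrink the likelihood numbers themselves, removing red vertically.</p><p>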
However, limited to one scale, even all these techniques together still leave too much red.</p><p>The Sierpinski carpet, the two-dimensional analog of the fractal Menger sponge, helps us visualize the benefits of reducing red simultaneously at multiple scales. At any one scale, only 1/9 of the red was removed, leaving 8/9. However, as we continue removing red at finer scales, the remaining surface is 8/9 of 8/9 of 8/9, and so on, asymptotically approaching zero total surface area. This suggests that recursive application of our techniques could approach zero aggregate risk. Alas, even our best case is not that good.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bPv4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bPv4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 424w, https://substackcdn.com/image/fetch/$s_!bPv4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 848w, https://substackcdn.com/image/fetch/$s_!bPv4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bPv4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bPv4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png" width="1456" height="864" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/ba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:864,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2164202,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bPv4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 424w, https://substackcdn.com/image/fetch/$s_!bPv4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 848w, https://substackcdn.com/image/fetch/$s_!bPv4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 
1272w, https://substackcdn.com/image/fetch/$s_!bPv4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fba7ee959-4ac0-46a7-96e2-2c9919b5e438_2048x1215.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Applying this visualization to a case study of an historic ocap system<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a>&#8212;the DarpaBrowser running on CapDesk written in the E ocap language&#8212;we get the following picture. 
At each level, there are some solid red boxes whose internal risk we cannot reduce at finer scales. The main solid red boxes are the confinement boxes containing legacy software. These legacy boxes appear at each scale. We can prevent confined legacy software from doing much damage to the world outside itself. But because the legacy software internally operates by its own principles, we cannot keep it from fouling its own nest. Wherever it appears, within that box the recursion stops.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7FAb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7FAb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7FAb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7FAb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7FAb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!7FAb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg" width="1456" height="953" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:953,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:230891,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7FAb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7FAb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7FAb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7FAb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2a2d77ea-9c06-4982-ba16-e207769e6027_1788x1170.jpeg 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Even with these limitations, applying our techniques recursively reduces the <em>density</em> of the red, tremendously reducing our aggregate risk.</p><h2>Overcome Adoption Barriers</h2><p>An often neglected security vulnerability is the human. Phishing can be used to get ocaps as well as passwords. People with legitimate permissions can be tricked into misusing them. Unless there is widespread adoption of secure systems, their theoretical possibility is not much use. 
The adoption barrier to making the world a safer place is ignored in most abstract discussions of advanced technological attacks, perhaps because one imagines that once humanity is urgently faced with these dangers, society will do what needs to be done. If there are known technological solutions for the dangers, it is natural to assume those most concerned can get a majority to build, adopt, and deploy these solutions fast enough to avert disaster.&nbsp;</p><p>As for massive cyber attacks, one would hope that government and industry would invest in rebuilding infrastructure on more securable, user-friendly bases. However, after seeing how weakly the world reacted to cyber attacks that revealed massive vulnerabilities, this appears to be wishful thinking. The more likely reaction to the panic following a major breach will be directing even more effort into entrenched techniques that do not and cannot work, because those are recognized best practices. Techniques that actually could be adequate, if the computational world were reconstructed on top of them from its beginning, will be seen as experimental, outside of established best practice, and prohibitively expensive. Instead, we will likely procrastinate with insecure affordable patches until disaster hits.</p><p>The adoption of secure computing is being delayed because the overall software ecosystem is not &#8220;hostile enough.&#8221; Companies and institutions can be too successful when they build otherwise high quality systems implemented in insecurable architectures. Small projects can free-ride on larger projects being more attractive targets. In a world where attacks primarily target large-scale, entrenched software projects, most risk to early-stage software projects is due to non-security dangers.&nbsp;</p><p>Therefore, for most early projects, investing in costly security is less important than other areas, such as rapid product prototyping and receiving user feedback. 
Additionally, when hiring employees, a small company considers each person&#8217;s additional value to the project. So with regard to security, companies generally minimize the education burden on their team by following what are considered current best practices, rather than more unusual (and more secure) techniques.&nbsp;</p><p>If a small project engages in the same allegedly best practices as bigger projects, it can escape attack because there are bigger targets. By the time the small project becomes large, and a serious target, it has enough capital to manage the security problem without truly fixing it. As of 2022, all large corporations manage their pervasive insecurities rather than fixing them. This is only sustainable because attacks are not yet extremely sophisticated.&nbsp;</p><p>A sophisticated attack would make the world hostile enough to end fragile systems, but would also severely disrupt society. On the positive side, when this higher level of attack gets deployed, the world&#8217;s software ecosystem will become hostile enough that smaller projects&#8217; relative safety through obscurity will end, because insecurable systems of all sizes will be punished early on. The downside is the danger of widespread destruction of the existing software infrastructure. If destruction passes a certain threshold, it could be difficult to transition to a safer situation without a serious downturn in the overall functionality of the world&#8217;s computation systems, not to mention its economy.&nbsp;</p><p>The problem is that a multi-trillion dollar ecosystem is already built on the current insecurable foundations, and it is very difficult to get adoption for rebuilding it from scratch. Thus, we should explore strategies to bridge from current systems to new secure ones. As mentioned in Chapter 5, in other contexts this is known as <em>genetic takeover</em>, a term derived from biology. 
In a genetic takeover, the new system is grown within the existing system, and is competitive within it. Once the new system becomes widespread enough, we can shift over to it, and eventually make the previous system obsolete.&nbsp;</p><p>A real-world analogy is how society has adapted to earthquake risk. Instead of requiring immediate demolition and reconstruction of an installed base of existing, unsafe buildings, rewritten building codes require that earthquake reinforcement be done gradually as other renovations take place. Over time, the installed base becomes much safer.&nbsp;</p><p>Genetic takeover in the computer industry has happened before. The entire ecosystem of mainframe software rested on a few mainframe platforms, which seemed to be permanently entrenched. The new personal computing ecosystem initially grew alongside it, complementing rather than competing with the old one, but eventually mostly displaced it. Attempting to replace today&#8217;s entrenched software ecosystem is not hopeless, but it is very difficult.&nbsp;</p><h2>Grow Secure Systems in Hostile Environments</h2><p>A hopeful counterexample to the insecurable computer ecosystem is the blockchain ecosystem. The large monetary incentives to hack blockchains invite effective red-team attacks, which successful systems must survive. Chris Allen summarizes this accurately: &#8220;<em>Bitcoin has a $100 billion bug bounty</em>.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a> When insecurity leads to losses, the players have no other recourse for compensation. 
Non-bulletproof systems will be killed early and visibly, and therefore only bulletproof systems will populate these ecosystems.</p><p>Bitcoin and Ethereum are evolving under a degree of adversarial testing that can create the seeds of a system able to survive cyberattacks of a magnitude that would destroy conventional software.&nbsp;</p><p>An analogy for the idea that bad bits can be fought with better bits can be taken from John Stuart Mill&#8217;s conception of the discovery of ideas. He did not deny that bad ideas can cause harm, but he observed that we have no divine knowledge of which ideas are good and which are bad. The only way we discover better ideas over time is by being willing to listen to bad ideas. The only robust protection from the damage bad ideas can cause is the better ideas that immunize us against the bad ones, by letting us understand why they are wrong. Competition among ideas in open argumentation ends up better discovering the truth over time. With information attacks, the flaw is not in the malware or the virus, but in the software&#8217;s vulnerability at the endpoint. Bad bits only damage you because you&#8217;re running insecure software.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-27" href="#footnote-27" target="_self">27</a></p><p>In this sense, even for informational interactions at a much lower level than ideas, messages can transmit a virus or do other harm. The answer to this harm is for the message receiver not to be vulnerable, by having a different informational architecture on the receiving side. In short, instead of preventing the sending of damaging bits, the best answer is better bits on the receiving side&#8212;secure operating systems, programming languages, and software security all the way up.&nbsp;</p><p>Secure software systems like seL4 are better information. The more bad information or malware there is, the greater the demand for a secure endpoint. 
Computation built on the ocap principles of voluntary interaction, together with the alternative digital architectures discussed in this chapter, can form a largely virus- and malware-safe system. There will always be new vulnerabilities and new attacks at upper abstraction levels. But the same iteration applies: these are best addressed by information receivers improving so as not to be harmed by the information. Building an ecosystem upon these security principles may result in a general-purpose ecosystem that can be used if the existing dominant ecosystem is destroyed. If a secure system grows enough before the world is subject to major and frequent cyberattacks, then we might achieve a successful genetic takeover. We can potentially leave civilization a new architecture to migrate to, especially if migration starts before the destruction, or if the destruction is gradual rather than sudden.</p><h2>Chapter Summary</h2><p>Physical systems rely on walls, barriers, and the rule of law to enable voluntary interactions, but digital systems have the advantage of an innate voluntarism. Ignoring the coupling with the physical world, one can only send bits, not violence, through the network, creating a fundamentally voluntary starting point for building architectures. Nevertheless, in the absence of computer security, the sent bits can still be catastrophically damaging.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-28" href="#footnote-28" target="_self">28</a> Voluntarism itself doesn&#8217;t mean risk-free cooperation. Computer security experts still put their machines on the internet, calculating that the benefits of cooperating there are worth the risks. 
Ramping up computer security can reduce risks and create a much more cooperative world.&nbsp;</p><div id="youtube2-2Z30hyOsXuY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;2Z30hyOsXuY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/2Z30hyOsXuY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Sound interesting? Watch <a href="https://foresight.org/salon/gernot-heiser-sel4-formal-proofs-for-real-world-cybersecurity/">Gernot Heiser&#8217;s seminar on applications of cybersecurity</a>.</p><h3>Next chapter: <a href="https://foresightinstitute.substack.com/p/new-players">WELCOME NEW PLAYERS | Artificial Intelligences</a></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://theprecipice.com/">The Precipice</a> by Toby Ord.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://arxiv.org/pdf/1702.06162.pdf">Survey of Automated Vulnerability Detection and Exploit Generation Techniques in Cyber Reasoning Systems</a> by Teresa Nicole Brooks.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://www.amazon.com/Cyber-War-Threat-National-Security/dp/0061962244">Cyber War</a> by Richard A. 
Clarke.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/crs-lloyds-business-blackout-scenario.pdf">Business Blackout: The Insurance Implications of a Cyberattack on the US Electric Power Grid</a> by Lloyds &amp; Cambridge Centre for Risk Studies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>In <a href="https://www.goodreads.com/book/show/33369264-blackout">Blackout</a>, Marc Elsberg gives a terrifying fictional, yet illustrative depiction of what kinds of collateral effects and potential for violence an extended power outage may bring.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><a href="https://us-cert.cisa.gov/ncas/alerts/aa20-352a">Advanced Persistent Threat Compromise of Government, Critical Infrastructure, and Private Sector </a>by the U.S. Cybersecurity and Infrastructure Security Agency.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><a href="https://us-cert.cisa.gov/ncas/alerts/aa21-200b">Chinese State-sponsored Cyber Operations</a> by the U.S. 
Cybersecurity and Infrastructure Security Agency.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>In <a href="https://www.sciencedirect.com/science/article/pii/S0016328720300604">Fragile World Hypothesis</a>, David Manheim makes the compelling case that, even without any proactive cyber war, the gradually deteriorating, insecurable computer infrastructure on which contained dangers such as nuclear weapons rely could be sufficient to cause significant damage.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><a href="https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/MaliciousUseofAI.pdf?ver=1553030594217">Malicious Uses of AI</a> by the Future of Humanity Institute.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p><a href="https://dl.acm.org/doi/10.1145/2692915.2628165">Using Formal Methods to Enable More Secure Vehicles: DARPA&#8217;s HACMS Program </a>by Kathleen Fisher.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Trusted Platform Modules.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Hardware Security Modules.</p></div></div><div class="footnote" 
data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p><a href="https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/">Capability Hardware Enhanced RISC Instructions</a> by Robert Watson, Simon Moore, Peter Sewell, Peter Neumann.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p><a href="https://web.archive.org/web/20140403134451/http://asmarterplanet.com/blog/2014/03/open-letter-data.html">A Letter to Our Clients About Government Access to Data</a> by Robert Weber. <a href="https://time.com/25410/ibm-nsa-letter/">IBM: We Haven&#8217;t Given Any Client Data</a> by Sam Gustin.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p><a href="http://www.impedimenttoprogress.com/storage/publications/A2_SP_2016.pdf">A2: Analog Malicious Hardware</a> by Kaiyuan Yang, Matthew Hicks, Qing Dong, Todd Austin, and Dennis Sylvester.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p><a href="http://www2.econ.iastate.edu/tesfatsi/ArchitectureOfComplexity.HSimon1962.pdf">Architecture of Complexity</a> by Herbert Simon. 
<a href="http://cognet.mit.edu/book/signals-and-boundaries">Signals and Boundaries</a> by John Holland.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p><a href="https://research.google/pubs/pub33037/">Delegating Responsibility in Digital Systems</a> by Mark Miller, Alan Karp, Jen Donnelley. <a href="https://www.youtube.com/watch?v=NAfjEnu6R2g&amp;list=PLzDw4TTug5O0ywHrOz4VevVTYr6Kj_KtW&amp;index=23&amp;t=714s">Architectures of Robust Openness</a> by Mark Miller.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p><a href="http://www.erights.org/history/morris73.pdf">Protection in Programming Languages</a> by James Morris. <a href="http://mumble.net/~jar/pubs/secureos/">A Security Kernel Based on the Lambda-Calculus</a> by Jonathan Rees. <a href="https://www.hpl.hp.com/techreports/2006/HPL-2006-116.html">How Emily Tamed the Caml</a> by Marc Stiegler and Mark Miller.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p><a href="https://digitalassets.lib.berkeley.edu/etd/ucb/text/Mettler_berkeley_0028E_13045.pdf">Language and Framework Support for Reviewably-Secure Software Systems</a> by Adrian Mettler. <a href="http://www.cs.cmu.edu/~aldrich/papers/ecoop17modules.pdf">A Capability-Based Module System for Authority Control</a>. 
Darya Melicher, Yang Qing Wei Shi, Alex Potanin, and Jonathan Aldrich.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p><a href="https://dl.acm.org/doi/10.1145/2824815.2824816">Deny Capabilities for Safe, Fast Actors</a> by Sylvan Clebsch et. al.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p><a href="https://www.researchgate.net/publication/221091272_The_Oz-E_Project_Design_Guidelines_for_a_Secure_Multiparadigm_Programming_Language">The Oz-E Project: Design Guidelines for a Secure Multiparadigm Programming Language</a> by Fred Spiessens and Peter Van Roy. <a href="https://www.researchgate.net/publication/221321646_Actors_as_a_Special_Case_of_Concurrent_Constraint_Programming">Actors as a Special Case of Concurrent Constraint Programming</a> by Ken Kahn and Vijay Saraswat.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p><a href="https://papers.freebsd.org/2010/rwatson-capsicum/">Capsicum: Practical Capabilities for UNIX </a>by Robert Watson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p><a href="https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/201505-oakland2015-cheri-compartmentalization.pdf">CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization</a> by Robert Watson, et. 
al.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p><a href="http://www.wirfs-brock.com/allen/posts/866">Javascript: The First 20 Years</a> by Allen Wirfs-Brock, <a href="https://research.google/pubs/pub37199/">Automated Analysis of Security-Critical JavaScript APIs</a> by Ankur Taly et al., <a href="https://research.google/pubs/pub40673/">Distributed Electronic Rights in Javascript</a> by Mark Miller, Tom Van Cutsem, Bill Tulloh. Many elements of modern JavaScript&#8212;promises, strict mode, freezing, proxies, weakmaps, classes&#8212;are due to this effort. These elements support <em>SES</em>, a secure ocap system that embeds smoothly in standard JavaScript. The <em>SES shim</em> is a library that builds a SES system within any standard JavaScript. SES itself is a proposal advancing through TC39, and is the basis for TC53&#8217;s JavaScript standard for embedded devices. Moddable&#8217;s XS is a specialized JavaScript engine built to run TC53-compliant SES on devices. Agoric runs SES as a secure distributed persistent language (Endo on SES). This distributed system includes the Agoric blockchain, using SES on XS as a smart contract language.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p><a href="https://www.youtube.com/watch?v=g28yRvHKIgc&amp;list=PLzDw4TTug5O0ywHrOz4VevVTYr6Kj_KtW">Towards Secure Computing</a> by Mark Miller. <a href="http://www.combex.com/papers/darpa-review/security-review.html">Security Review of the Combex DarpaBrowser Architecture</a> by Dean Tribble, David Wagner. <a href="http://combex.com/papers/darpa-report/index.html">Darpabrowser Report</a> by Marc Stiegler, Mark Miller. 
<a href="http://www.erights.org/talks/no-sep/secnotsep.pdf">The Structure of Authority</a> by Mark Miller, Bill Tulloh, Jonathan Shapiro.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p>In personal communications.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-27" href="#footnote-anchor-27" class="footnote-number" contenteditable="false" target="_self">27</a><div class="footnote-content"><p><a href="https://www.youtube.com/watch?v=kOFzisF7aNw">Computer Security as the Future of Law</a> by Mark Miller.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-28" href="#footnote-anchor-28" class="footnote-number" contenteditable="false" target="_self">28</a><div class="footnote-content"><p>How to deal with bits that are damaging on a social, economic, or psychological level in our idea-space is discussed in chapter 3.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[7. DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields ]]></title><description><![CDATA[Previous Chapter: GENETIC TAKEOVER | Cryptocommerce]]></description><link>https://foresightinstitute.substack.com/p/defend-physical</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/defend-physical</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:24:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/fX_UKfVxT10" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Previous Chapter: <a href="https://foresightinstitute.substack.com/p/genetic-takeover">GENETIC TAKEOVER | Cryptocommerce</a></h3><p></p><p>Historically, violence has been declining while we have made progress on a variety of factors that humans may value. 
Cryptocommerce can help us cooperate better in advancing progress. Biotechnology is unlocking healthier lives for more people, while remote sensing and robotics enable production at lower cost, using less energy, and creating less waste. Soon, we may be able to produce entirely new materials with molecular precision. We need technological progress to continue enabling more and more valuing entities to achieve what they value. So far, so good. Now let&#8217;s look at the dark side. Technology is raising the stakes of our civilizational game, and could threaten its very existence. There are at least two traps to steer away from.</p><h1>Avoid Trap 1: &#8216;Small Kills All&#8217; Via Technological Proliferation</h1><p>Technologies are becoming cheaper and more widely available. CRISPR went from being the biotechnology achievement of the decade to a tool used routinely by lab students. Democratization of powerful technologies also means a proliferation of dangers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Nuclear weapon development is a large undertaking employing special materials to make distinctive products. In contrast, chemically manufacturing strands of DNA could soon become affordable to the point of printing targeted deadly proteins or cells at home.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Further out, atomically precise manufacturing could allow ordinary humans to employ ordinary materials to create entirely unexplored military applications.&nbsp;</p><p>The risks from increasingly widespread weaponizable technology are sometimes summarized under the dynamic of &#8220;small kills all&#8221;, i.e. 
fewer people can cause ever greater destruction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> It may only be a matter of time until one of the 7 billion active players seeks large-scale civilizational destruction and can acquire the technology to cause it. To understand the danger, let&#8217;s zoom in on a few risky technological dynamics, from nukes to robotic violence to biotechnology and nanotechnology.&nbsp;</p><h2>Dangerous Dynamics&nbsp;</h2><h3>Nuclear Weapons</h3><p>Nukes are a great example of how we have historically fared with massive risks of large-scale violence. The protections against an accidental nuclear launch were terrifyingly weak, while the degree of lying to the public about the safety controls in place was terrifyingly high.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Public opinion was also frightening. In a 1945 Fortune poll on US attitudes about the atomic bomb, more than half of respondents, knowing about the devastating consequences for Japan, agreed &#8220;<em>we should have used the two bombs on cities just as we did</em>&#8221;, and almost a quarter agreed &#8220;<em>we should have quickly used many more of them before Japan had a chance to surrender</em>&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>&nbsp;</p><p>FLI&#8217;s Unsung Hero Award acknowledges that it may only be due to a few individuals that we lived to tell the tale.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> In 1983, Stanislav Petrov ignored the Soviet early-warning detection system that had falsely detected five incoming American nuclear missiles. This likely prevented an all-out US-Russia nuclear war. 
Vasili Arkhipov vetoed his submarine captain&#8217;s decision to launch nuclear weapons at US ships on the false assumption that they were under attack. This was at the height of the Cuban Missile Crisis, so we can assume Arkhipov single-handedly averted a nuclear war. The degree of governmental secrecy, the willingness to use nuclear strikes, and the accidental close calls are worrying with respect to future warfare. Nukes are still dangers hiding in plain sight.&nbsp;</p><p>Fifty years after the 1970 Nuclear Non-proliferation Treaty - in which non-nuke-holding nations agreed not to proliferate while nuke-holding nations agreed to pursue disarmament - five authorized nuclear weapons states still have more than 13,000 warheads in their combined stockpile.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> The Intermediate Range Nuclear Forces Treaty between the US and Russia has ended, and North Korea is making progress toward a nuclear-tipped intercontinental ballistic missile that could reach LA in 30 minutes. The fragile security infrastructure around nukes is withering away, making accidents more likely with every year that goes by.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> We have not been safe from catastrophic weapons for a long time. It is thanks to outstanding individuals and blind luck that we have survived thus far. 
For civilization to survive, we shouldn&#8217;t rely on those factors.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a>&nbsp;</p><div id="youtube2-fX_UKfVxT10" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;fX_UKfVxT10&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/fX_UKfVxT10?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Listen to Daniel Ellsberg, the author of Doomsday Machine, <a href="https://foresight.org/salon/nuclear-risks-doomsday-still-hiding-in-plain-sight-daniel-ellsberg-author-of-doomsday-machine/">discuss nuclear risks.</a></p><h3>Robotic Violence</h3><p>Automation of cooperative arrangements will lower their cost and increase cooperation. Automation will also be the main multiplier for our ability to cause violence by lowering its costs. Robotic weapons, drones controlled by far-away individuals, and self-driving cars turned into land-based missiles are all weaponizable physical technologies. A single trigger can cause an automated reaction resulting in many deaths. Tiny autonomous drone swarms may be programmable to assassinate victims based on their biological features.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a>&nbsp;</p><p>In response to activism, Google and some other large corporations agreed to not cooperate with governments in building autonomous military drones. But many of these companies are still working towards full self-driving automobile software as fast as they can. 
While self-driving cars are not built to kill people, they are built on insecure operating systems. If adversaries can take over millions of cars and run them into crowds at high speed, these cars become lethal land-based drones. In addition to technology whose primary purpose is violence, we are surrounded by unsecurable technology which could be used as weaponry.&nbsp;</p><p>Importantly, robotic violence does not require advanced AI. We smile at naive media depictions of the Terminator robot. But, in terms of destructiveness, those robots aren&#8217;t a long way from current Boston Dynamics robots. Make those robots a factor of ten cheaper and more battery-efficient, and mount a machine gun on them. The same robots that do adorable dances in today&#8217;s promotional videos become a means for automated destruction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a>&nbsp;</p><h3>Biotechnology Risks</h3><p>Biotechnology is responsible for some of human health&#8217;s biggest successes, such as antibiotics and vaccines. The response to the COVID-19 pandemic, with a vaccine designed in a record time of a few days, highlights the imperative to advance biotechnological progress. At the same time, the pandemic gave a taste of the potential magnitude of anthropogenic bio risks as existential risks to humanity. Within a few centuries, and recently accelerated by breakthroughs like CRISPR, we went from figuring out what pathogens are made of to creating new, genetically modified viruses with different properties.&nbsp;</p><p>Toby Ord is particularly worried about gain-of-function research, which makes pathogenic strains of infectious agents with higher transmissibility, lethality, or vaccination resistance. For instance, avian influenza A/H5N1 virus can be very dangerous to us, but its natural version is not transmissible between humans through the air. 
Nevertheless, in 2011, two laboratories produced an avian influenza variant that became transmissible through the air between ferrets. Shortly after, building on this research, point mutations of the H5N1 virus genome were screened to identify mutations that would allow airborne spread. Gain-of-function research is aimed at improving pandemic prevention and defense.&nbsp;</p><p>But our poor biosafety track record is especially concerning with respect to agents designed for optimal lethality. Ord draws attention to the Dugway Proving Grounds case. Dugway was established by the US military to work on chemical and biological weapons. In 2015, it accidentally distributed samples containing live anthrax spores, instead of the expected inactivated spores, to US labs across eight states. In response to such accidents, the US placed a moratorium on this research that has since been rescinded.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> Terrifying biosecurity accidents regularly happen all over the world, and the COVID-19 pandemic showed how quickly pathogens can spread globally.</p><p>In addition to critical accidents, we should also worry about deliberate threats. Pathogens could be engineered to be vastly more lethal and transmissible than SARS-CoV-2 and similar viruses. More than 1,200 different kinds of potentially lethal bio-agents, including bacteria, viruses, and parasites, have been studied for use as biological weapons.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> As with the history of biological accidents, governments currently have the greatest capacity for deliberate abuses. There are international treaties guarding against bioweapons use, such as the 1972 Biological Weapons Convention, which prohibits their development, production, acquisition, transfer, and stockpiling. 
Nevertheless, we lack the means to monitor compliance at either the national or international level. Some believe North Korea has assembled a biological arsenal containing anthrax, bubonic plague, smallpox, and yellow fever.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> Likewise, as shown in the Dugway case, the US is clearly not twiddling its thumbs when it comes to the exploration of biological weapons.</p><h3>Nanotechnology Risks</h3><p>At our current rate of scientific breakthroughs, we stand ready to unlock radically novel technologies. If Richard Feynman is correct that &#8220;<em>the principles of physics [...] do not speak against the possibility of maneuvering things atom by atom</em>&#8221;, one breakthrough will be molecular nanotechnology.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> This future level of nanotechnology involves having atomically-precise control of the structure of matter. Starting from current chemical synthesis, we may develop progressively better nanosystems capable of macromolecular self-assembly. The long-term goal is <em>atomically precise manufacturing</em>, i.e., the ability to use coordinated nanosystems operating with atomic precision to produce macroscale products with unprecedented performance.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a><em>&nbsp;</em></p><p>Assembling materials from the bottom up would allow us to make any physically possible structure at minimal cost or waste, similar to an incredibly precise 3D printer. Nanoscale medical applications could help repair and rejuvenate our bodies. Nanoscale processing of materials could boost material welfare. 
Artificial molecular machines could target threats to nature&#8217;s molecular machinery, reversing damage to our planet&#8217;s ecosystem. Combined with information technology, new materials could host entire networks of sensors, actuators, and communication devices. Summed up, &#8220;<em>asking what atomically precise manufacturing systems can do with materials is much like asking what computers can do with information</em>.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a></p><p>While this level of nanotechnology is likely still many years off, it will come with substantial risks. In addition to boosting existing weapons&#8217; performance and enabling miniature mobile systems, it may lead to new types of weaponry. With desktop-scale nanofabs, separating uranium isotopes could become as easy as today&#8217;s 3D printing. Swarms of billions of insect-sized drones could communicate, monitor their surroundings, and act in a coordinated fashion. Our inexperience with nanotechnology threats is especially worrying due to our poor historical track record of handling much simpler technologies such as nukes.&nbsp;</p><h1>Avoid Trap 2: &#8216;Civilization Suicide&#8217; Via Single Point of Failure&nbsp;</h1><p>We should be terrified of a scenario in which technological proliferation allows individual malicious actors to cause global destruction. To deal with such risks, it can seem tempting to explore solutions that prop up the powers of governments - or even establish a world government - in the search for safety. </p><p>The initial appeal of an actor with global reach comes from the sobering observation that, unless we can effectively monitor and prevent risks globally, there will always be pockets of actors that can go rogue, killing everyone. 
To mitigate this danger, a world government would have to orchestrate the monitoring of every individual in contact with potentially dangerous technology. When a threat is detected, it would have to coordinate its rapid elimination. This would mean equipping such a government with an unprecedented level of surveillance and physical military weaponry.&nbsp;</p><p>Unfortunately, powerful large-scale actors in charge of world-destroying technologies may not mitigate the risks from rogue individuals. After all, they are also made up of individuals. The risk of malicious individuals causing world destruction is still present. In fact, if the organizational structure amplifies the reach of the large number of individuals making up the actor, it may further amplify their malice. By the logic of adverse selection, whenever we create an opportunity to hold power, we create competition for it. The greater the power, the greater the race to capture it, and the less likely that competitors will all have purely good intentions. </p><p>Even in the best case, where the centralized organization could filter out any bad actors it attracts, it is in danger. If the &#8220;good guys&#8221; get to the top, they become a target for the &#8220;bad guys&#8221; who seek to take over. As long as individual actors in a world government are vulnerable to external extortion, a benign world government would be vulnerable to it.&nbsp;</p><p>The development of nuclear weapons technology was hastened because large governments that felt threatened thought having them would improve their security. Currently, we have a sufficient balance of power among multiple nuclear powers, and all of them have so far succeeded at making decisions that did not result in their use against an adversary. If we had a world-dominating central body in possession of nukes that feels its existence is threatened, would this be a world more or less likely to deploy nukes than our current world? 
Hanson warns that, by creating a single point of failure, such a power&#8217;s suicide could quickly become a civilizational suicide.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a>&nbsp;</p><p>Even if we could imagine a perfect central actor without those critical dangers, how could we possibly hope to create it? We are not in such a world and, for reasons just discussed, not everyone thinks it would be a good idea. Actors with good reasons to try to prevent a global takeover by a world government may decide to attack first to avoid otherwise becoming the victim of such a strike. The simple fact that we have a multipolar deployment of nukes internationally already narrows the set of possible survivable futures. It could be fatal to try to transition to a unipolar world, or even to let things proceed to the point where a sudden transition to a unipolar world is plausible.&nbsp;</p><p>The temptation to create powerful central coordination, for instance to solve <em>small kills all</em> risks from the proliferation of technologies, is Robin Hanson&#8217;s best guess for the <em>Great Filter </em>that humanity may face. The Great Filter is one explanation as to why we have not found alien life even though, by the sheer number of potential life-carrying substrates and the time it has had to contact us, we should have. There may be a Great Filter all life has to pass to evolve into an interstellar species. No other life form has passed it yet, which is why we don&#8217;t see anyone. Either this filter is behind us, such as abiogenesis (life arising from non-life), in which case our chances of survival are favorable.
But it may be ahead of us on our civilizational path, such as a risk that could wipe out humanity.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> It would be sad if, by attempting to protect civilization, we ended up preemptively destroying it.</p><h2>Dangerous Dynamics</h2><h3>Pervasive Surveillance&nbsp;</h3><p>In the past, certain actors have tried to take over the world. Some made a certain amount of progress but ultimately failed. As technology advances, future attempts may not. Technologies for direct monitoring and control of a population are becoming available, becoming cheaper, and coming to the attention of authoritarian states everywhere. Even in relatively free countries, we must assume any private information held by companies or governments is accessible by sophisticated hackers from both our own and other unfriendly nation states.&nbsp;</p><p>Google&#8217;s &#8216;Don't Be Evil&#8217; ethos led to an internal collective sense that employees would not perform evil actions. Yet those good employees let Google gather a very dangerous amount of aggregate information on people in one vulnerable place. There were multiple attempts to move Gmail from plain text to encrypted ciphertext but they never succeeded. Even if Google does not abuse its power, its existence is a tremendous temptation. For Google, external abuse has occurred both when it was hacked by a foreign nation-state and when it was served with U.S. national security letters demanding its customers&#8217; information. Such national security letters are unconstitutional and a severe violation of democratic accountability.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a> But they still happen, because having many people's plain text email at one company is too juicy a target.
We must prevent situations occurring in which those who are not evil can still be used for evil.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a></p><p>Widespread surveillance may become more common. Individuals voluntarily install surveillance in their homes, wear sensors on their bodies, and carry a tracked pocket-sized computer that can perform its own external surveillance. The smaller and cheaper sensors get, the harder it will be to know if someone is equipped with monitoring sensors. You may even be unaware that someone has put such sensors in your clothes or your body. Information that becomes public will remain public, waiting for steadily improving machine-learning-supported correlation to make sense of it. The importance of knowledge will lead to races to obtain knowledge, improve sensors, and perform more powerful surveillance.&nbsp;</p><h3>Robotic Enforcement</h3><p>A similar trend drives the automation of physical enforcement. Using humans as violent peace enforcers is too expensive, especially relative to the cost of robotic enforcement. The U.S., today&#8217;s dominant military power, already uses drones for military purposes. A human being is in the control loop, but that person has no physical &#8216;skin in the game&#8217; in any battles. And nothing ensures humans will stay in those control loops, especially when the stakes are raised. A 2020 Libyan drone airstrike was already conducted with no human in the loop.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a>&nbsp;</p><p>Despite our best efforts, in a shooting war, formerly unethical actions may be reconsidered and people will do what they think is necessary. At that point, how corruptible the rules guiding robotic enforcement are becomes a very present danger.
Corruption can be amplified vastly more by automated enforcement mechanisms than by human ones. The Nuremberg Trials and reactions to the Milgram Experiment helped create a norm that if we receive an order that is illegitimate by society&#8217;s current moral code, we are individually responsible for not obeying it. The friction in human enforcement, critical to our safety, is not present in automated enforcement. It will take longer to build a system with built-in friction against corrupt orders. The path of least resistance is to deploy systems before we know how to include such friction.&nbsp;</p><p>The dangers of a transition to pervasive automated enforcement through robotics are approaching. Our failure to put adequate controls in place with respect to nukes is frightening when we project it onto this much more incremental, less visible threat. A robotic takeover could happen by an existing power gradually expanding its surveillance or military force. This is a boiling frog scenario. Democracy and the constraints against totalitarianism in the U.S. are much weaker than we thought. If the vulnerabilities of democracy in the still-dominant superpower are a cause for concern, so is the absence of accountability in its rising rival, China. If rapid deployment of a technological advantage in surveillance or enforcement technologies results in a global regime, such a single actor would create an existential risk in itself.</p><h1>Decentralize Defense: Multipolar Active Shields</h1><p>We need a civilization that decreases our vulnerability to the <em>small kills all</em> trap without going to the opposite extreme and creating the <em>power suicide</em> trap.
For a moment, let&#8217;s dream big and design a possible system that, even if not easy to build, would, if successful, actually address the risks from biotechnology, nanotechnology, and robotics without creating single points of failure.</p><p>First, we need to prevent <em>small kills all </em>attacks from succeeding, even when we can&#8217;t stop them from being attempted. To successfully defend against an attack we must have a deployed fabric of systems that detect and react to attacks based on trustworthy mechanisms. This is called an <em>active shield</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a> In theory, widely deployed sensors could collect encrypted data about the usage of relevant technologies. An automated protection system could analyze the data and enforce the action required to prevent an abuse of the technology. If we cannot prevent robotic enforcement, we must ensure it is verifiably used for legitimate defense only. Similar to the white blood cells of our immune system, such defense machines would fight a variety of dangerous replicators, decreasing in scale as our threats do.</p><p>Second, we need to prevent active shield deployments from creating the conditions for <em>power suicide</em>. The rules that govern it must not only stop those that they monitor from engaging in violence but also keep the monitoring components in check. For that, the system must be multipolar, so that the different components monitor each other; if one component goes rogue, the rest of the system must be able to gather enough force to counter it. Instead of a mutually trusting system of active shields, we need a mutually suspicious system of active shields.
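</p><p>To make this mutual-suspicion requirement concrete, here is a toy sketch in Python (the node names, the perfect-detection assumption, and the two-thirds threshold are all illustrative choices, not a design from the text): a set of shield nodes watch one another, and a node is disabled only when a supermajority of its peers independently flag it.</p>

```python
# Toy model of a multipolar active shield: every node watches every other
# node, and a node is shut down only when more than two-thirds of its
# peers flag it as rogue. No single node can disable another on its own.
from dataclasses import dataclass, field

@dataclass
class ShieldNode:
    name: str
    rogue: bool = False                       # ground truth, hidden from peers
    disabled: bool = False
    flags: set = field(default_factory=set)   # names of peers that flagged this node

def observe(watcher: ShieldNode, watched: ShieldNode) -> None:
    """Watcher flags a peer it believes went rogue (detection assumed perfect here)."""
    if watched.rogue:
        watched.flags.add(watcher.name)

def enforce(nodes: list) -> None:
    """Disable any node flagged by more than two-thirds of its peers."""
    quorum = 2 * (len(nodes) - 1) / 3
    for node in nodes:
        if len(node.flags) > quorum:
            node.disabled = True

nodes = [ShieldNode("A"), ShieldNode("B"), ShieldNode("C", rogue=True), ShieldNode("D")]
for watcher in nodes:
    for watched in nodes:
        if watcher is not watched and not watcher.rogue:
            observe(watcher, watched)
enforce(nodes)
print([n.name for n in nodes if n.disabled])  # → ['C']
```

<p>Note that the rogue node&#8217;s own missing vote does not matter: the honest supermajority suffices, which is the point of making the shield mutually suspicious rather than mutually trusting.</p><p>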
This is a resilient way of building an overall system that creates a stable enforcement of the rule of law without assuming that any one component of the system is incorruptible.&nbsp;</p><p>We should be aghast at the degree of surveillance and enforcement considered here and must do what we can to fight it. But if we can&#8217;t stop it, we must face the terrifying choice of what type of system it is and how to control it. Let&#8217;s look at three features a multipolar active shield should have:</p><h2>Monitor: Encrypt Sousveillance</h2><p>Successfully monitoring for hostile technology activity requires an unprecedented level of surveillance. On the one hand, we are lucky that robots, nukes, biotech, and nanotech (in contrast to cyber threats and AI, which we tackle in the next chapter) involve physical aspects, so there is something physically observable. On the other hand, the physical processes we need to monitor to distinguish dangerous biotech or nanotech uses from benign ones are extraordinarily small. This suggests the need for almost unimaginable surveillance levels.&nbsp;</p><p>Compare this to our most recent precedent, nukes. While their current situation is still very concerning, no one has used them in battle since World War II. This is partly due to luck and partly due to non-proliferation treaties backed by monitoring regimes. We are fortunate that nuclear weapons are a very monitorable gross physical phenomenon. The average person has no need to privately traffic in uranium or engage in other activities with strong nuclear signatures. Likewise with nuclear tests: people don't have important private reasons to engage in activities that look like a nuclear test. Monitoring for nuclear explosions is not very intrusive to people's private lives.&nbsp;</p><p>The physical objects monitored to detect hostile nuclear weapons activity are large-scale and easy to verify.
This is particularly clear when compared to what is needed to monitor for future offensive biotechnology and nanotechnology use. Those involve molecular machines, and so require verification at the molecular level: a daunting challenge. To prevent these hostile attacks, we need to monitor activities in small-scale labs that do small-scale manipulation of widely deployed synthesis mechanisms that are otherwise general-purpose. Given that the amount of weaponry one can make in a small building will be substantial, we may not be able to preserve our homes as privacy fortresses.&nbsp;</p><p>Deploying such pervasive physical monitoring without strong privacy safeguards could create dangers in excess of what we want to avoid. It could lead to a Big Brother-style top-down pervasive surveillance state that makes <em>1984</em> pale in comparison. There are other options. David Brin discusses a theoretical alternative by which top-down surveillance is kept in check by bottom-up sousveillance.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a> He suggests that, given future tech-enabled affordable mosquito-size cameras, the only way to prevent pervasive surveillance is via pervasive sousveillance, or upward-looking monitoring.&nbsp;</p><p>In this scenario, all information is simply public, so that all of us have complete access to all physical activity in the public realm. The information produced by pervasive sousveillance lets different entities keep each other in check, thereby stopping corruption. Sousveillance could hold abuses in check but would be destructive of privacy. Even if we could adapt our privacy norms to match such a transparent society, it&#8217;s not clear that we should. Even if everyone can, in theory, see everyone else equally, the costs of analyzing information are asymmetric.
Apart from the extortability of individuals, the bigger danger is the long-term power advantage this information confers, which can destabilize the balance.</p><p>A preferred monitoring option is an automatic network of artificial monitoring agents, from which information is not revealed to humans unless pre-agreed criteria are triggered. Ben Garfinkel uses the example of bomb-sniffing dogs, which report only whether a given bag holds explosives. He suggests there is no reason why monitoring security-relevant information requires the system to learn anything else. While tricky to implement, an encrypted sousveillance fabric&#8212;one that only releases information when it detects anomalies&#8212;could avoid top-down surveillance abuses and the loss of privacy from transparent sousveillance. Such a system, while hard to construct, could be the beginning of a path to safety.&nbsp;</p><h3>From ThinThread to Encrypted Sousveillance</h3><p>An early experiment with encrypted surveillance originates with William Binney, then a senior official at the NSA. The NSA collects and stores unencrypted data, but in theory limits the access of individual analysts to this information. An analyst, for example, may be allowed to make only a limited set of queries to the database and only view the portion of records that are classified as matching these queries. From the Snowden revelations we know how little self-restraint was actually practiced by the agencies. Congress established laws that the NSA was supposed to obey, but the internal judicial procedure became a rubber stamp. Almost all of the requests for information that the NSA took to the FISA Court were approved, amounting to window-dressing of checks and balances.</p><p>During the crucial period of the emergence of this surveillance capacity, William Binney tried to construct an internal automated system with some degree of built-in governance.
<em>ThinThread</em> was a prototype of an encrypted monitoring system that would only allow data on individuals to be decrypted if a judge found probable cause to believe the target was connected with serious crime. Such internal controls could have enabled a mutually monitoring human system, aided by FISA Courts and oversight by Congress. It would have still been corruptible without democratic accountability, but it would have been a first step toward a system whose automated governance regime prevents uncontrolled access to surveillance information. Unfortunately, the program was canceled and eventually replaced by a system without the filtering and encryption protections.&nbsp;</p><p>Given recent progress in AI and cryptography, it is time to make another attempt at making monitoring privacy-preserving. In theory, if a surveillance task can be automated, then it can be done in a way that avoids requiring the monitoring party to collect the data in unencrypted form. There are existing facial recognition systems that use homomorphic encryption to report only whether an image contains the face of a suspect. From there, it is not far to imagine future systems that report the identities of individuals only if they detect illegal activity with high probability. This comes closer to establishing a tripwire system in which encrypted information only gets revealed if it triggers certain pre-agreed identifiers. Using zero-knowledge proofs, it may be possible for the monitoring fabric to prove that its information gathering satisfies some agreed-upon tripwire criteria without revealing anything that it is not supposed to reveal.&nbsp;</p><p>The hard part will be designing a multipolar monitoring system in which different agents reliably keep each other in check. Illegitimate release of information is not a visible abuse that the other monitoring actors can detect.
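</p><p>The tripwire pattern described above can be sketched in a few lines of Python. This is purely illustrative: the activity labels are hypothetical, and the XOR one-time pad stands in for real machinery such as homomorphic encryption or zero-knowledge proofs. The sketch only shows the intended information flow, in which benign observations stay sealed forever while tripwired ones become decryptable.</p>

```python
# Sketch of a tripwire monitor: every observation is stored encrypted, and
# the decryption key for a record is retained only if the observation
# matches a pre-agreed tripwire predicate. Keys for benign observations
# are discarded, so those records can never be read.
import os

TRIPWIRE = {"uranium_enrichment", "pathogen_synthesis"}  # pre-agreed criteria

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

class TripwireMonitor:
    def __init__(self):
        self.vault = []  # list of (ciphertext, key-or-None) pairs

    def observe(self, actor: str, activity: str) -> None:
        record = f"{actor}:{activity}".encode()
        key = os.urandom(len(record))            # one-time pad, for illustration
        keep_key = key if activity in TRIPWIRE else None
        self.vault.append((xor(record, key), keep_key))

    def disclosures(self):
        """Decrypt and yield only the tripwired records."""
        for ciphertext, key in self.vault:
            if key is not None:
                yield xor(ciphertext, key).decode()

m = TripwireMonitor()
m.observe("alice", "gardening")                  # stays sealed forever
m.observe("mallory", "pathogen_synthesis")       # triggers the tripwire
print(list(m.disclosures()))  # → ['mallory:pathogen_synthesis']
```

<p>A real system would additionally have to prove, for example with zero-knowledge proofs, that it discards keys as promised; in this sketch that property holds only by construction of the code.</p><p>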
In the case of physical coercion, other watchers could see the involuntary interaction performed by bad watchers and treat them as attackers. The case of illegal release of information is harder to monitor because detection has to be done by internal inspection.&nbsp;</p><p>Any privacy-preserving system is non-transparent by virtue of the fact that it is observing information that it is not revealing. If it is non-transparent, it is hard to reliably prove that it is not revealing the information in a way that is abusable. Such double non-transparency is a high bar, but it is needed if we want to trust that our information is neither publicly nor privately revealed.&nbsp;</p><p>Fortunately, zero-knowledge proofs, secure multiparty computation, homomorphic encryption, differential privacy, and other encryption and privacy efforts are progressing rapidly. More funding is needed to speed up the development of better tools. The temptation to corrupt monitoring fabrics is going to be enormous. We must not let the fear of the dangers and the promise of privacy preservation lower our guard to the point that we allow abusable monitoring to be deployed. We will come back to this when discussing computer security in chapter 8.&nbsp;</p><h2>Detect: Design Ahead&nbsp;</h2><p>Let&#8217;s assume we successfully design an encrypted multipolar monitoring fabric. How do we define a dangerous activity to monitor for? Nick Bostrom compares the process of a civilization engaged in technological discovery to pulling balls out of an urn. He suggests that we need a system that ensures we don&#8217;t pull out any &#8220;black balls&#8221;, i.e. technologies that cause disaster instead of progress. What if we pull out a ball that kind of looks gray to us, while others see it as more silver?
As we start playing with it, it may get dirty and turn darker and darker until someone speaks up and calls it a black ball.&nbsp;</p><p>When someone discovers such a civilization-destroying potential of a technology, we need an open system in place that is effective at problem-solving around the threat. If we have the complex superintelligence of civilization, with a strong interest in preserving the peaceful decentralized fabric of cooperation, then we can bring this intelligence to the problem. We need to make our cooperative architectures more reliable, more accountable, and more widely reviewed, so more of us can know how much danger we are in, and give feedback.</p><p>As a precursor to such a system, let&#8217;s take inspiration from Robert Reid&#8217;s proposal for pandemic preparedness. He suggests that a global cooperative layer of scientists, equipped with local knowledge of the viral patterns in their area, could create open source weather maps of dangerous pathogens.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a> Our world&#8217;s complex adaptive systems have millions of trade-offs on millions of constantly changing margins. We must resist the temptation of creating centralized organizations tasked with solving the problem. This reduces the intelligence that gets to be applied to that problem. If complex systems lose their adaptiveness to high-level planning, civilization loses its ability to adapt.&nbsp;</p><h3>The Oracle Problem</h3><p>This is especially pressing when we increasingly rely on automated monitoring systems. They turn a question about the real world into a decision procedure about electronically judgeable evidence that supposedly represents a claim about the real world. When all that these systems can draw on is evidence brought to them by sensors in the real world, how can we trust the outcome?
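</p><p>One partial answer, borrowed from blockchain oracle designs, is never to trust a single sensor: aggregate many independent reports with a robust statistic such as the median, which a minority of corrupt sensors cannot move outside the range of honest readings. A minimal sketch, with invented readings for illustration:</p>

```python
# Robust sensor aggregation: with 2f+1 independent sensors, up to f corrupt
# reports cannot push the median outside the range of honest readings.
from statistics import median

honest = [20.1, 19.8, 20.3, 20.0, 19.9]   # five sensors near the true value
corrupt = [999.0, -999.0]                 # two adversarial outliers

agreed = median(honest + corrupt)
print(agreed)  # → 20.0, inside the honest range
```

<p>This does not solve the Oracle Problem, since a majority of colluding sensors still wins, but it raises the cost of corruption from one sensor to many.</p><p>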
The <em>Oracle Problem</em> is especially pressing in situations in which not all sensors can be assumed to be well-meaning.</p><p>Fortunately, it only becomes a pressing problem once we have already solved all other problems. The Oracle Problem has become a concern in the blockchain world only because the ecosystem managed to advance to a point where this problem arises. This is a tremendous achievement, and experimentation with solutions in the blockchain space should inform our thinking moving ahead.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a> Noting the problem means that we can get more minds working on it.&nbsp;</p><p>Fortunately, we have ample opportunity to learn along the way. Some technologies may allow for a window in which it is possible to design ahead and simulate them well before actual construction is feasible. For instance, advanced nanotechnology can and likely will be simulated well before it can be implemented. This time gap between knowing what is buildable and being able to build it creates room to increase safety. In addition, biotechnology is sometimes classified as the biological subset of the category of molecular machine systems referred to as nanotechnology. This means that in addressing near-term biotechnology dangers we will pick up strategies applicable to the longer-term challenges of nanotechnology. It&#8217;s not too early to start. </p><h2>Defend: Open Arms&nbsp;</h2><p>Let&#8217;s imagine we succeed at creating an automated multipolar monitoring system that can detect dangerous activity while preserving privacy. Next we need to design the enforcement mechanism that is activated if illegal activity is detected. Automated contracts themselves cannot intervene directly in physical reality. This is problematic because an enforcement mechanism has to be based in physical reality, for instance via robotics.
If we think of monitoring as the sensory side of smart contracts, we need to combine it with a motor side that can turn decisions into actions in the world.</p><p>Similar to the monitoring fabric, the logic of the enforcement fabric must be designed as an open source system. Knowledge transfer is in everyone&#8217;s interest, similar to when the U.S. unconditionally offered other nuclear parties its inventions for protecting against rogue launch. If any one player can prevent a false launch, it is better for everyone. Once we agree on the rules by which the system operates, and have simulated and tested them, we should strive for a simultaneous multipolar deployment by all parties. If any one portion of the active shield makes use of its enforcement mechanism illegitimately, the rest of the fabric can use its aggregate power to shut down the part of the network that operated illegitimately.&nbsp;</p><p>The closest current analog to the kind of open source innovation required, termed <em>Open Arms</em>, is perhaps large multi-way consensus mechanisms on the blockchain. A large chain, such as Ethereum, faces the problem of having to design a single set of incorruptible rules to be enforced in a multipolar manner. Even though all of the participants in the mechanism are corruptible, they mutually check each other through the blockchain replication mechanism, which should make the system incorruptible.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-27" href="#footnote-27" target="_self">27</a>&nbsp;</p><h3>Drones with Body Cams&nbsp;</h3><p>What would such a system look like in practice? While automated enforcement is still far in the future, we can already make decisions about the predecessors to these technologies. One such decision is demanding the analog of body cams in emerging enforcement machines.
Body cams make corrupt behavior harder to hide, and thus less likely to occur, and make proper behavior that has an ugly outcome easier to defend. They both hurt bad cops and help good cops.&nbsp;</p><p>Every automated system with the capacity to kill people, including existing robotics such as drones, should have a body cam built into it. Such a black box recorder would contain all footage of what happened leading up to a fatality. If drones are acting lawfully, there is little ground to resist a time-delayed revelation of the information in the black box, i.e. the footage becomes public after a proper time window. Twenty years later, the footage will not be significant to intelligence operations. Closing the monitoring feedback loop to hold bad actors accountable may require less than twenty years, but it is a lot better than never. If we can get twenty years agreed upon now, in ten years we may be able to get five years. If we can get something accepted that embodies the principle, then even with a time delay that is painfully long, we can start negotiating.&nbsp;</p><p>Entities choosing to be transparent changes the game completely. Most of game theory assumes that each entity is opaque inside, but the evolution of human cooperation leverages the difficulty people have in lying. With transparent entities, it&#8217;s no longer a question of what they would decide is in their interest; it&#8217;s a question of what they can decide given how they are constructed. Open source design and construction can define this. This applies to active shields and will apply to future artificially intelligent agents.</p><h1>Navigate the Traps: Avoid First Strike Instabilities</h1><p>Let&#8217;s imagine we succeed at designing an encrypted multipolar monitoring system that could detect attacks and enforce appropriate responses.
To work in practice, it must also be deployed without causing a first-strike instability.&nbsp;</p><p>Given the uncertainties in a potential conflict, all parties have much to gain from simultaneous multilateral deployment of a mutual defense system. But, even in a close-to-best-case scenario&#8212;a system that, if deployed, monitors for offensive use and takes action to prevent that use&#8212;the danger remains that one side might deploy before its competitors. Unfortunately, the technological designs resulting from sophisticated design-ahead also create a <em>first-strike instability</em>: even if no party wants to start a conflict, the fear that another party might incentivizes a first strike.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-28" href="#footnote-28" target="_self">28</a> We have no simple answer to this problem.</p><p>The goal is to move towards a strong framework of norms of voluntary interaction with a highly multipolar set of interests. Nevertheless, this world must emerge out of a world that does not yet have that high degree of decentralization. We live in a world that is militarily dominated by the United States. China and its military power are rising. Any transition to a balanced multipolar world order would require both to give up considerable military power.&nbsp;</p><p>If either of them had the ability to deploy a privacy-respecting monitoring fabric, it would also have the ability to deploy a fabric that is as powerful but not constrained by the safety protocols. Previously, the NSA had the ability to deploy the ThinThread fabric, which obeyed privacy-preserving protocols. It was the active project being pursued, but it was killed in favor of the more intrusive fabric.
If we can answer the question of how to get a global, credibly self-constraining monitoring fabric despite the preferences of today&#8217;s military powers, we are in good shape to worry about enforcement.&nbsp;</p><p>We admit to the great fear that an active shield could itself amount to a permanent military takeover. The attempt to build it, deploy it, and get it entrenched can go very badly, but so will civilization if we don&#8217;t try. If we try, we have a chance of succeeding; if we don&#8217;t, we end up with a system that does not even presume to have minimal internal controls. We must try to build something whose stable point is a neutral framework of rules where judgment is distributed.</p><h3>Compensate:&nbsp;Mutual Defense, Commerce, and Science&nbsp;</h3><p>The more decentralized the systems we build now, the more likely future systems will follow this trend. The most obvious compensating dynamic is military mutual defense pacts. Turkey, Germany, and Italy have a nuclear sharing system, a mutual defense arrangement that can be seen as a model for the multipolar framework of an active shield system. In mutual defense pacts of multiple separately deployed active shield subsystems, when one party misbehaves, the others can cooperate to restore the balance.&nbsp;</p><p>International trade is another means to push back the evolution of systems of force in favor of cooperation. If nation states are large circles, companies are smaller circles within them, and multinational companies are circles cutting across them. A multinational is still incorporated in a particular country that can corrupt it. But governmental behavior emerges from interest groups. Many interest groups benefit from trade. As trade with the rest of the world increases, it is increasingly in their interest to pressure their governments not to pursue external military dominance.
During the British hostilities with their American colonies, British merchants, hurt by trade losses, were amongst the strongest proponents of peace. Since large militaries are not actively engaged in commercial activity, the danger of their dominance is not yet sufficiently recognized. We hope to sensitize you to the fact that it is in your short-term interest to engage in multilateral paths in a mutually observant way that avoids centralized military force. </p><p>Sometimes, sets of citizens across nations can form explicit voluntary bonds to compensate for international power escalation. The Pugwash Conference was initiated by scientists through the Russell-Einstein Manifesto, which called for a conference to assess the dangers of mass destruction. US, Soviet, and other scientists continued to meet during the Cold War, through the Cuban Missile Crisis and the Vietnam War, to draft background work for the Non-Proliferation Treaty and the Biological and Chemical Weapons Conventions. Similarly, the Asilomar Conference grew out of the realization of the dangers of genetic engineering. It resulted in scientists voluntarily deciding to halt experiments using recombinant DNA. This was a voluntary agreement to coordinate to not create a danger no one wanted to create. If 99% of us stay within voluntary safety controls, then even if 1% starts going off the rails, we can rely on the superintelligence of the rest of civilization to help.&nbsp;</p><p>If we can increase our multipolarity while future technological realities are emerging, we have a chance of arriving at a world that is no longer takeoverable because there are enough capable forces that don&#8217;t want to be taken over. If one power commits suicide, its place and resources can be taken by other powers who have not committed suicide. There is no guarantee of stability. Under the Peace of Westphalia, dozens of military parties maintained their strategic balance in an antifragile way.
The Peace of Westphalia was multipolar for centuries, but it eventually collapsed. Likewise, the Founding Fathers were not confident that their arrangement would not collapse into a dictatorship. They were in a terrifying situation and their only choice was to do the best they could. Today their division of power is not functioning as well as originally designed, but it has lasted for centuries, and it is difficult not to consider that a success.&nbsp;</p><p>The constitution worked well enough that the intelligence of our civilization can iterate on the issue of preserving the balance. Now, we are in the terrifying situation of witnessing military proliferation powered by strong technology. Our only choice is to do the best that we can. If we set up a sufficiently multipolar world prior to the emergence of much greater intelligences, we may be able to defer part of the power balancing problem to them. We need to get to the point where it's our descendants' problem, but where we have enabled them to address it because we didn't hand them a dictatorship.&nbsp;</p><h1>Chapter Summary</h1><p>We discussed the dark side of our rapidly maturing civilization: risks that come from automating technologies that multiply individuals&#8217; destructive abilities. Tempting solutions, such as strong central actors that monitor and control attacks, carry their own risk of power suicide, which may be worse than what they prevent.</p><p>Less obvious solutions are gradually emerging, thanks to innovations in cryptography. A multipolar active shield would rely on automated encrypted sensor sousveillance of security-relevant activity. It would only disclose its detections when pre-agreed tripwires suggesting illegal activity are triggered. If the shield detects such activity, its robotic enforcement arm prevents the rogue node from engaging in hostile actions.
The shield&#8217;s multipolar deployment, by which many nodes watch both relevant activity and each other, prevents dangerous activity, outside and within the fabric, from reaching a threatening level.</p><p>If we can design and deploy such a system, we may feed two birds with one scone: we take the sting out of the inevitable automatic surveillance and enforcement, and use it to prevent other dangers. While automated sousveillance and enforcement seem like an impossibly distant dystopia, the future will grow out of today&#8217;s decisions. The better the structures we put in place soon, the less of a dystopia it will be.&nbsp;</p><div id="youtube2-4A0fUAkOkxY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;4A0fUAkOkxY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/4A0fUAkOkxY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Curious for more? Listen to this <a href="https://foresight.org/salon/transparent-society-sousveillance-david-brin-author-of-the-transparent-society/">Intelligent Cooperation seminar. 
</a></p><h3>Next up: <a href="https://foresightinstitute.substack.com/p/defend-cyber">DEFEND AGAINST CYBER THREATS | Computer Security</a></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://theprecipice.com/">The Precipice</a> by Toby Ord.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://futureoflife.org/background/benefits-risks-biotechnology/">Benefits &amp; Risks of Biotechnology</a> by Future of Life Institute.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://nickbostrom.com/papers/vulnerable.pdf">Vulnerable World Hypothesis</a> by Nick Bostrom.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://www.ellsberg.net/doomsday/">The Doomsday Machine</a> by Daniel Ellsberg.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://drive.google.com/file/d/1DdVdVrawJUGdGAVELyO_1_-6gcjNAgR6/view?usp=sharing">Fortune Poll International Affairs</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>These two 
individuals were recently honored with the <a href="https://futureoflife.org/future-of-life-award/">Future of Life Unsung Hero Award.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><a href="https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons">Treaty on the Non-Proliferation of Nuclear Weapons.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><a href="https://philpapers.org/rec/MANSFA-3">The Fragile World Hypothesis</a> by David Manheim.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>According to <a href="https://en.wikipedia.org/wiki/Doomsday_Clock">The Doomsday Clock</a>, a symbol representing the likelihood of a man-made global catastrophe, maintained since 1947 by the members of the Bulletin of the Atomic Scientists, we are closer to midnight than ever before. 
From 3 minutes to midnight, when the Soviets kicked off the nuclear arms race by testing their first nuclear weapon in 1949, we moved to 100 seconds to midnight in 2020.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>The video <a href="https://www.youtube.com/watch?v=LVwD-IZosJE">Why We Should Ban Lethal Autonomous Weapons</a> is a great way to recalibrate one&#8217;s fear.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p><a href="https://www.youtube.com/watch?v=fn3KWM1kuAw">Do You Love Me?</a> by Boston Dynamics. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p><a href="https://www.nap.edu/catalog/21666/potential-risks-and-benefits-of-gain-of-function-research-summary">Potential Risks &amp; Benefits of Gain-of-Function Research</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p><a href="https://en.wikipedia.org/wiki/Biological_agent">Biological Agents</a> on Wikipedia.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p><a href="https://www.theatlantic.com/magazine/archive/2017/07/the-worst-problem-on-earth/528717/">How to Deal with North Korea</a> by Mark Bowden.</p></div></div><div class="footnote" 
data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p><a href="http://www.zyvex.com/nanotech/feynman.html">There&#8217;s Plenty of Room at the Bottom</a> by Richard Feynman. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p><a href="https://www.oxfordmartin.ox.ac.uk/downloads/academic/201310Nano_Solutions.pdf">Nano-solutions for the 21st Century</a> by Eric Drexler and Dennis Pamlin. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p><a href="https://web.archive.org/web/20181123032424/https://www.theguardian.com/science/small-world/2013/oct/14/big-nanotech-post-industrial-manufacturing-apm">Towards Post-industrial Manufacturing</a> by Eric Drexler.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p><a href="http://www.overcomingbias.com/2018/11/world-government-risks-collective-suicide.html">Why World Government Risks Collective Suicide</a> by Robin Hanson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p><a href="https://mason.gmu.edu/~rhanson/greatfilter.html">The Great Filter</a> by Robin Hanson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" 
target="_self">20</a><div class="footnote-content"><p><a href="https://www.eff.org/press/releases/national-security-letters-are-unconstitutional-federal-judge-rules">National Security Letters Are Unconstitutional, Federal Judge Rules</a> press release by EFF.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>Contrast this with Lavabit, the open-source encrypted webmail service that Snowden used to communicate with human rights lawyers.&nbsp; It shut down its services in response to what is believed to be a United States government order to reveal or grant access to information. It later re-launched with an improved privacy environment.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p><a href="https://spectrum.ieee.org/automaton/robotics/military-robots/lethal-autonomous-weapons-exist-they-must-be-banned">Lethal Autonomous Weapons Exist; They Must Be Banned</a> by Stuart Russell et al.  
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p><a href="https://archive.org/details/enginesofcreatio0000drex">Engines of Creation</a> by Eric Drexler.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p><a href="https://www.davidbrin.com/transparentsociety.html">The Transparent Society</a> by David Brin.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p><a href="https://samharris.org/podcasts/special-episode-engineering-apocalypse/">Engineering the Apocalypse</a> by Rob Reid and Sam Harris.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p>For instance, the prediction market Augur uses reputation tokens to pay its community to serve as an Oracle that correctly resolves predictions. Thus far, this has worked well even for resolving contentious situations such as Vitalik Buterin&#8217;s bet on Trump losing the 2020 election. Chainlink is a decentralized oracle project that seeks to provide accurate data, for instance on DeFi prices, via redundancy across multiple nodes, so that if one node fails to report the correct price, the other nodes can nevertheless form a consensus on prices. Looking ahead, if verifiers increasingly rely on contradictory information sources, arriving at truth-tracking Oracles may become more challenging. 
Even defining what counts as evidence, such that it remains relevant in a future very different from the present, becomes challenging.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-27" href="#footnote-anchor-27" class="footnote-number" contenteditable="false" target="_self">27</a><div class="footnote-content"><p>In chapter 6, we saw that given the voluntarism of the world in which systems like Ethereum were built, we could in theory have much greater diversity with smaller-scale decisions because we don&#8217;t need that kind of centralized decision about what the rules are. However, the risks of bad physical enforcement mechanisms, i.e. robots engaging in violence, are so dangerous that if we do achieve something that is even close to as incorruptible as Ethereum for our governance problems, we should be extremely proud.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-28" href="#footnote-anchor-28" class="footnote-number" contenteditable="false" target="_self">28</a><div class="footnote-content"><p><a href="https://stevenpinker.com/publications/better-angels-our-nature">The Better Angels of Our Nature</a> by Steven Pinker.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[6. 
GENETIC TAKEOVER | A More Cooperative World]]></title><description><![CDATA[Previous chapter: IMPROVE COOPERATION | Information, Money, Rights, and Contracts]]></description><link>https://foresightinstitute.substack.com/p/genetic-takeover</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/genetic-takeover</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:24:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/6HASTSqsZ7M" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Previous chapter: <a href="https://foresightinstitute.substack.com/p/improve-cooperation">IMPROVE COOPERATION | Information, Money, Rights, and Contracts</a></h3><h1>Getting There: Genetic Takeover</h1><p>Cooperation could become much richer, thanks to a suite of new technologies. Prediction markets, space property rights, mixed contracts, assurance contracts, and the other arrangements envisioned so far are just the tip of the iceberg. <br><br>It was hard to imagine today&#8217;s internet before the first website. Likewise, it is hard to imagine the novel cooperative arrangements ahead. But it's worth trying; after all, in Alan Kay&#8217;s words, &#8220;<em>the best way to predict the future is to invent it</em>&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> We may be as far from this new world as the ARPANET was from the current web, but this shouldn&#8217;t stop us from dreaming big. Most of these experiments will fail, but the few that succeed may radically lower the costs of cooperation.</p><p>Lowering cooperation costs by orders of magnitude is not just a quantitative difference. A difference of several orders of magnitude in a quantity often changes a phenomenon&#8217;s character in ways best seen as qualitative. 
We might usher in a new era of cooperation, analogous to the commercial revolution of the 1500s and 1600s in the Mediterranean world, which marked an inflection point toward today&#8217;s much richer world of exchange.</p><h3>Co-evolve, Cooperate, Compete</h3><p>How might we transition from our paper world, in which printed contracts dominate and checks remain the second most preferred way to pay bills, into a more cooperative world?</p><p>How cities are built, how people self-organize, and what kind of society they organize into is largely based on how the economics of defense and attack evolve with technological progress. Law, for instance, evolved to deal with problems as they arose using the level of technology that was available to people at the time. Today&#8217;s legal systems evolve by competing on at least three axes. First, on how well they insulate us from underlying rules of biology, such as violence. Second, on how well they create rules for their own survival, such as capitalism vs. communism. Third, on how well they insulate us from their own dynamics, such as the separation of powers that lets the watchers be watched.</p><p>The invention of <a href="https://www.youtube.com/watch?v=kOFzisF7aNw">cryptographic technology</a>, coupled with worldwide networks, changes the economics of defense and attack of today&#8217;s legal systems. This change is greater than going from the bow and arrow to the gun. Once again, society will have to self-organize around this change to reach a new equilibrium. Where will we settle? Even if new technologies of cooperation allow us to build a perfect realization of a neutral framework of rules, we still rely on the legal system to insulate us from the rules of biology. Both systems will co-adapt, each constraining the other without either one emerging as dominant.</p><p>Let&#8217;s look at two popular positions in cyber debates. Let&#8217;s call one perspective <em>legal absolutism</em>. 
It states that human law, enforced by governments via their monopoly on violence, will remain the dominant point of reference. In its most exaggerated form, this perspective holds that anything legislatable has the upper hand. It amounts to denying reality in the face of technological advance, comparable to trying to legislate that pi equals 3.</p><p>Let&#8217;s call the other perspective <em>code absolutism</em>. It states that code is law, which will take over the legal system and displace all of human law. This ideal is unrealistic, because the people who write code and the computer hardware that runs it are in government-controlled physical spaces. As long as governments deploy legal systems with strong public support, many computer systems will enforce existing law.</p><p>Rather than attainable goals, these caricatures of code and legal absolutism are better regarded as side constraints on the evolution of real systems.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-bzr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-bzr!,w_424,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 424w, https://substackcdn.com/image/fetch/$s_!-bzr!,w_848,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 848w, https://substackcdn.com/image/fetch/$s_!-bzr!,w_1272,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 
1272w, https://substackcdn.com/image/fetch/$s_!-bzr!,w_1456,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-bzr!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif" width="852" height="196" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:196,&quot;width&quot;:852,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-bzr!,w_424,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 424w, https://substackcdn.com/image/fetch/$s_!-bzr!,w_848,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 848w, https://substackcdn.com/image/fetch/$s_!-bzr!,w_1272,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 1272w, https://substackcdn.com/image/fetch/$s_!-bzr!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff2ebd51c-cb97-43a4-abbc-8d9c58ad60ea_852x196.gif 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><p>We can borrow the term <em><a href="http://originoflife.net/takeover/">genetic takeover</a></em> from biology to describe such an alternative transition. A phenotype, highlighted in yellow, evolves through a new genotypic representation, G2, gradually emerging within an older one, G1. The legacy system provides context and constraints influencing what is viable in the new system that grows within it. The evolving phenotype now has entirely novel qualities, i.e. here it is spiky rather than smooth.</p><p>Let&#8217;s imagine a world in which fences were not invented, but property rights were. People keep records of where boundaries lie; some can hire guards to protect their land and use the court system to prosecute boundary crossers. Eventually, someone invents a fence. A debate ensues between the legal absolutists, who argue that fences are not legitimate, and the fence absolutists, who argue that property law has become irrelevant. There are obviously many lower-cost solutions to the boundary-crossing problem than legal records. A fence may substitute for the law in some of these cases, but not in all cases, because people can climb over and go under fences.</p><p>Likewise, the world in which traditional legal systems emerged had no better choice; it did not have the enablers of strong cryptography and the net. Like a fence, a smart contract will substitute for some, but not all, cases in which we currently use legal systems. Nevertheless, since code can be buggy and we sometimes just want a human in the loop, paper law will remain a useful tool. A contract acts by provoking behavior in an enforcement mechanism; for a legal contract this is the legal system, while a smart contract simply chooses a different enforcement system. 
Our current legal systems have never been static and will change along with technology, requiring us to continuously update our notion of &#8220;rule-based systems&#8221;.</p><h3>Change the Rules Without Relying on the Rules</h3><p>Apart from honoring voluntary Schelling Points as constraints, we don&#8217;t necessarily have to rely on existing rules to change them. Often, policy hasn't been concretely articulated into laws. This leaves an intermediate space, which different policymakers have different ways of regulating. Premature filling of such regulatory gray areas could be disastrous for innovation. The U.S. Supreme Court had to overturn early railroad regulations drafted before we had sufficient experience with the railroad system.</p><p>The internet could not have emerged without many people operating in the emerging blank spaces of their time. Its rules are built on both political and technological interaction. Many political decisions were embodied in protocols such as those promoted by the <a href="https://www.ietf.org/">IETF</a>, which created a transnational rules system in which most countries let their citizens participate. It was a base system of rules founded on voluntary gray-area interactions that, at that time, no one decided to try to suppress.</p><p>No rules clearly allowed the internet&#8217;s phenomena, but no one interpreted the ambiguity as an enforceable rule violation. It was unclear whether conducting commercial transactions online was legal. Many assumed it was not, but people just started doing it, regulators refrained from interfering, and it gradually grew into an accepted legal activity. Enough people were interested in seeing certain uses continue so that whatever was needed to continue those uses became technological reality. 
The internet changes the nature of the emergent rules, such that they increasingly operate by its logic.</p><p>Bitcoin nudges the rules further to settle ambiguities in favor of continuing to allow new activities. Cryptocurrencies are built from voluntary interaction that is costly to suppress. At most, a nation can take its citizens out of the game and weaken its own position. We have settled on the intermediate state of regulated on- and off-ramps into cryptocurrency via services like Coinbase. These convert dollars into crypto and vice versa. Yet, once on the other side of the boundary, private currencies let you transact without record-keeping, at least until you off-ramp into fiat currency once more.</p><p>Many of the more complex arrangements are not illegal, but are genuinely novel economic phenomena. Blockchain-based prediction markets may fall under gambling laws by one reading of U.S. law. But they are not treated as such by the mainstream economy. You can earn cyber coins betting on the future, withdraw those coins, and trade them for fiat money. Know Your Customer (KYC) and Anti-Money-Laundering (AML) efforts mean this activity directly borders current legal systems. Still, the lack of a central control point means that to shut the process down, one would have to go after people individually via KYC records. This is extremely costly given the ease of converting one cryptocurrency into another.</p><p>We are used to negotiation at the boundary between different rule-based systems; international law is an evolving consensus from negotiation at jurisdictional boundaries. We can compare the interaction of the legacy system with the emerging system to interactions among jurisdictions. When jurisdictions have contradictory rules, engaging in multi-jurisdictional transactions requires complex negotiations. Despite the messiness of those negotiations, they often succeed. 
Transacting across the legacy legal system, which is backed by the government's monopoly on violence, and a tech-enabled cooperative system, backed by cryptography and strong anonymity, will be equally messy. As these systems encroach on each other&#8217;s realms, both systems will try to fight back. As this fight becomes increasingly costly, they may converge on a new legal-technological reality.</p><h3>There Be Dragons: Risks of Centralization</h3><p>Can we say anything about the dynamics dictating the trajectory of this emerging system? Progress relies on innovators wanting to introduce new cooperative arrangements into the world. Once introduced, the arrangements become much more valuable if not subject to their inventors' whims. As long as a centralized entity provides the arrangement, it does not have the benefits of multipolar checks and balances.</p><p>In the early days, computers had tremendous architectural diversity. Each computer had a little operating system and localized interactions through the timesharing system. The ARPAnet connected some of those computers by enabling them to email each other without suppressing their architectural diversity. Each was an independent experiment, but suddenly, via network effects, there was a larger community to cooperate with.</p><p>Fast forward to today: our interactions are already centralized in the Internet Protocol (IP) on the web. There aren&#8217;t many completely different hypertext experiments that do not interoperate with it. However, the web itself is largely decentralized, with protocols allowing many different parties to communicate with each other. They function more like language than governments. <a href="https://doi.org/10.7551/mitpress/2170.003.0005">One could say</a> that the decentralized nature of the protocol achieves a system that is 99% decentralized and 1% centralized.</p><p>Realizing the dangers of centralized power, we are tempted to regulate. 
Yet, to make progress toward voluntarism at ever higher levels of abstraction, we need to rely on competition as a discovery procedure for finding out which systems work well. If innovators deliver more value by taking themselves out of the control position, they can find arrangements to make more profit by doing so. If they don&#8217;t find a way to deploy their arrangement without their controls, their competitors will.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Today, we have single web governing bodies but also efforts against them. The domain name system functions as a centralized decision-making point to allocate the namespace&#8217;s root names. The <a href="https://spritely.institute/static/papers/petnames.html">Petnames project</a> seeks to enable interconnectivity without a global namespace. The <a href="https://decentralizedweb.net/brewster-kahle/">Dweb movement</a> builds a distributed web infrastructure that compensates against concentrated control. If an architecture causes us to make overly centralized decisions, we should consider it a bug and <a href="https://blog.archive.org/2021/02/18/behind-the-scenes-of-the-decentralized-web-principles/">find a better one</a>.</p><h3>Learning from History: IBM vs. the PC vs. Google</h3><p>Can we rely on decentralized systems outcompeting centralized ones? A look at computer history provides a mixed picture: in the computer industry's early days, it was common knowledge that IBM's mainframe monopoly would let it monopolize the entire computer industry unless the government broke it up. Antitrust forces were engaged against IBM; IBM fought back. It is possible that antitrust laws had little to do with IBM's loss of the dominant market position and that the personal computer's rise had everything to do with it. 
Microsoft then outplayed IBM and became the monopoly supplier of PC operating systems and most office applications.</p><p>For a while, Microsoft was so dominant in software that it seemed it would remain the monopoly forever, unless broken up. Antitrust forces were engaged against Microsoft, and Microsoft fought back. Once again, it is possible that antitrust forces had little to do with Microsoft losing its dominant position as opposed to the rise of the web. Microsoft was a substantial web player, with Internet Explorer being an initial winner of the browser wars, but that dominance was unstable.<br><br>Gradually, today's internet giants emerged and turned email into Gmail, Usenet into Reddit, blog replies into Facebook, pingbacks into Twitter, Squid into Cloudflare, and Gnutella into The Pirate Bay. By having a Gmail account, you and Google engage in a voluntary interaction with each other. Nevertheless, if you violate vague terms of service and they arbitrarily banish you, everything you managed for that identity becomes inaccessible. When restricting or cutting off access, internet giants aren't subject to accountability or due process. Within this digital jurisdiction, the banned become non-persons, exiled according to unrevealed criteria. This is a very poor form of cooperation.</p><p>If we think of these giants as an arrangement through which we cooperate, we don't want those interactions to be placed at risk by a third party. The problem with the giants is not just power centralization, but that their power feels arbitrary. If we don't know why our accounts could be suspended, we live in a state of vague fear, reminiscent of a lawless state of nature. We need the rule of law and an understanding of what the rules are so we know how to cooperate. 
In an extreme view, our internet giants act like historically oppressive regimes, dominated by their rulers&#8217; unpredictable whims rather than by laws or rights of citizenship.</p><p>As in the earlier Microsoft case, the centralization wave of today's internet giants has been swiftly accompanied by antitrust efforts. But once again, we should rely on the rise of new innovations to solve the problem. Chatbots might outcompete Google, and the rising tide of specialized (social) media platforms might outcompete Facebook and Twitter. This repeating dynamic provides hope that across future systems, coalitions of non-dominant players will continue to compensate against the power centralization of giants.</p><p>It took centuries for today&#8217;s open societies to outcompete tyrannical ones, just as it took decades for open source software to outcompete proprietary software. We should not expect quick wins of decentralized systems over centralized service providers, even if the decentralized ones bring a credible rule of law. There will be many failed ventures, and it will take time to build up an adequate level of functionality, but we have repeatedly witnessed that the long-term winners are those that create a rules framework providing a predictable basis for cooperative interaction with minimal risk. As long as future levels of the game are defined by a rich taxonomy of rights and composable contracts, cooperation can evolve.</p><h2>Chapter Summary</h2><p>Starting from our paper world, we embark on a co-evolution in which legal and digital jurisdictions develop new modes of cooperation within constraints imposed by the other. In the digital domain, a polycentricity of experiments can compensate for centralization. 
A voluntary future of ever richer cooperative games awaits, yet there are serious physical and digital threats looming along the way, a topic we'll examine next.</p><div id="youtube2-6HASTSqsZ7M" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;6HASTSqsZ7M&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/6HASTSqsZ7M?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Check out <a href="https://foresight.org/salon/a-peaceful-transition-into-cryptocommerce-jim-epstein-primavera-de-filippi-brewster-kahle/">this seminar</a> to take the first steps into a more cooperative world.</p><h3>Next up: <a href="https://foresightinstitute.substack.com/p/defend-physical">DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields</a></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Alan Kay shared these ideas at a 1972 talk at the Xerox Palo Alto Research Center.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>One centralization risk for a tech-based cooperative system has been well articulated for the computer security foundation it&#8217;s built on. TOR, the onion router, protects a message sender&#8217;s physical location from discovery by routing traffic between hops with multiple inputs and outputs. Still, we don&#8217;t know if TOR nodes are conspiring. 
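</p><p>The danger compounds per hop. Here is a back-of-envelope sketch in Python, under the simplifying assumption that each relay is independently compromised with probability f and that a circuit uses TOR's standard three hops (real deanonymization attacks often need only the entry and exit relays, so this understates the risk):</p>

```python
# Probability that an entire onion-routing circuit is adversary-controlled,
# assuming each relay is compromised independently with probability f.
def circuit_compromise_probability(f: float, hops: int = 3) -> float:
    return f ** hops

for f in (0.1, 0.5, 0.9):
    p = circuit_compromise_probability(f)
    print(f"adversary runs {f:.0%} of relays -> {p:.1%} of circuits fully compromised")
```

<p>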
If 90% of the TOR nodes are run by the NSA &#8212; and you cannot know whether they are or are not&#8212;it's not providing the desired privacy protection. This is the <em>Sybil attack</em>, which involves falsely simulating a highly multipolar world. <br><br>Ironically, this danger is a collateral consequence of our ability to cooperate anonymously, one of our greatest protections. Because we do not know who is running the TOR nodes, we do not know if they are conspiring. If TOR nodes are not conspiring, TOR helps provide anonymity, which could create the Sybil danger elsewhere. Even if TOR was corrupted by Sybil, it would not prevent the Sybil danger elsewhere. All the rest of us are still anonymous to each other, even if somebody knows everybody's identity. <br><br>To defend against the Sybil attack, we need to build active compensation dynamics into our new cooperative infrastructure. Bitcoin is spread across jurisdictions such that any single corrupting jurisdiction is just taken out of the game. But one government could threaten the lives of their Bitcoin miners to force them to do something corrupt. Proof of location is an economic attestation that a computation is happening at a particular location at a particular time. This can incentivize decentralization by creating rewards for computing in spread out geographic areas. Geographic diversity means jurisdictional diversity, making governmental interference and collusion more costly. Rather than relying on entities to naturally keep each other in check, proof of location incentivizes multipolarity, thus creating compensating dynamics against centralization risks.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[5. 
IMPROVE COOPERATION | New Info, Money, Rights, and Contracts]]></title><description><![CDATA[Previous chapter: SKIM THE MANUAL | Intelligent Voluntary Cooperation & Paretotropism]]></description><link>https://foresightinstitute.substack.com/p/improve-cooperation</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/improve-cooperation</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:23:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/Cjxl11sAbN0" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Previous chapter: <a href="https://foresightinstitute.substack.com/p/skim-the-manual">SKIM THE MANUAL | Intelligent Voluntary Cooperation &amp; Paretotropism</a></h3><p><br>Having read the manual for civilization and analyzed previous rounds of the game, let's turn to how to make the most of future rounds in this chapter. Over the course of civilization, we developed rights, contracts and various other mechanisms and institutions to help us cooperate. But challenges remain: it is costly to search for other players to cooperate with, negotiate an agreement that is attractive to all sides, credibly commit to it, and enforce the terms once they are set. Fortunately, we can look to technologies to lower these costs and open up new levels of cooperation.</p><h1>Free Information</h1><p>Legal systems emerged to establish neutral rule systems that support cooperation without violence. Governments and their rules are based on jurisdictions, yet many of us no longer know or care where the people we communicate with are. The internet not only makes it dramatically easier to find others to collaborate with but it also produces a permanent system of voluntary rules that is very costly to suppress.</p><p>Free speech is a great example. Governments imprison people for violating various national speech codes. 
Edward Snowden is in political asylum because he is rightly worried the US won&#8217;t grant him a fair trial for releasing information to the public. Despite this, all the servers storing the released information and all the routers transmitting it create an emergent phenomenon by which the information stays public.</p><p>It would require global totalitarianism to reliably and permanently remove information from public availability. In the absence of that, any single entity that tries to erase widely dispersed information renders itself irrelevant, or worse, achieves the opposite of its desire: the <em>Streisand Effect</em>.<a href="#fn3egzy7fziap"><sup>[1]</sup></a></p><div id="youtube2-7g5p15vM0OA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;7g5p15vM0OA&quot;,&quot;startTime&quot;:&quot;2467s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/7g5p15vM0OA?start=2467s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://foresight.org/salon/audrey-tang-taiwan-digital-minister-tools-for-openness-asia-beyond/">Watch Audrey Tang&#8217;s seminar on Tools for Openness.</a> </p><h3>Prediction Markets</h3><p>Free information is a foundational pillar of cooperation, but today's information is often rather disorganized. Can we do better by supercharging how we make sense of it? Philosopher <a href="https://press.princeton.edu/books/paperback/9780691210841/the-open-society-and-its-enemies?srsltid=AfmBOorUqzH_0ACN9u2eXMjWqR1wfMv1hQHIEGxqQisGbSqOaSY1qN-k">Karl Popper</a> observed that knowledge, much like biology, evolves by a process of variation, replication, and selection. 
<em>Variation of knowledge</em> means tossing new ideas out there, <em>replication of knowledge</em> means spreading ideas, and <em>selection of knowledge</em> means criticizing ideas. A healthy information system will allow good ideas to rise to the surface.</p><p>Prediction markets are such a system. Take the early market set up at Foresight&#8217;s 1999 annual gathering (screenshot below). People would contribute their expertise by opening a claim about the future for others to bet on. For instance, &#8220;World product doubles 2025-30&#8221; tracks the likelihood of a substantial increase in world economic productivity by 2030.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> A claim&#8217;s price represents the market&#8217;s consensus odds that the event will occur. If others believe the future price will be higher than the market indicates, they buy, and in so doing raise the consensus price. For instance, the odds for World GDP doubling were at 30-38% in 1999, moving to a wider 25-50% by 2000.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OG0M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OG0M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!OG0M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OG0M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OG0M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OG0M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg" width="1456" height="1447" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1447,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:725103,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!OG0M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OG0M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OG0M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OG0M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F4354a7cf-6da8-4661-a53f-e88db13cf2e6_1869x1857.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>According to prediction market pioneer <a href="https://www.academia.edu/119196368/Idea_Futures">Robin Hanson</a>, these markets can help swap our reliance on poorly incentivized experts for a market in ideas that encourages individuals to contribute their specialized knowledge: <em>&#8220;if markets create a consensus about the value of an ownable item, such as the price, futures markets create an immediate consensus about future consensus.&#8221;</em></p><p>Plenty of variations are possible. <em>Replication markets</em> would be prediction markets that estimate the reproducibility of scientific research, <em>reputation markets</em> would trade in the estimated reputation of individuals or corporations, and <em>incentive markets</em> would allow participants to incentivize actions by betting against them, making it lucrative for others to go after that very same action to prove the prediction wrong.</p><p>We can&#8217;t wave a magic wand and create a world free from interest groups trying to manipulate the information ecosystem in their favor. Instead, we can create a system robust against those forces. While we can&#8217;t assume that all problems are solvable, we should not underestimate the cleverness of games with novel payoffs for solving our problems. There will always be imperfections that an omniscient god could resolve. But the key feature of this system is that the overall direction of opinions results from the voluntary interaction of many individuals without anybody in charge. 
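</p><p>How individual trades move the consensus can be made concrete with Hanson's logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets. This is an illustrative assumption; the mechanism the Foresight market actually used isn't specified here:</p>

```python
import math

# LMSR market maker: quotes an implied probability from the outstanding
# YES/NO shares. b is a liquidity parameter; larger b means a given trade
# moves the price less.
def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# A trader who believes YES is underpriced buys 50 YES shares, paying the
# cost difference; the quoted probability rises toward their belief.
print(f"price before: {lmsr_price(0, 0):.2f}")                # 0.50
trade_cost = lmsr_cost(50, 0) - lmsr_cost(0, 0)
print(f"trade cost: {trade_cost:.2f}")
print(f"price after buying 50 YES: {lmsr_price(50, 0):.2f}")  # 0.62
```

<p>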
History shows that a system of knowledge that can be commanded to operate in a particular way risks receiving extremely destructive commands.</p><div id="youtube2-lKEK2j0zrcY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;lKEK2j0zrcY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/lKEK2j0zrcY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Listen to this <a href="https://foresight.org/salon/prediction-replication-markets-augur-metaculus-gnosis-oracle-problems-beauty-contests/">group discussion on prediction markets and the problems they may solve</a>. </p><h1>New Rights to (Do) Things</h1><p>When introducing property rights as one of the most basic institutions in the last chapter, we suggested that <em>rights to things</em> are really heuristics for <em>rights to do things</em>. But sometimes efficient rights to do things don&#8217;t correspond cleanly to tangible objects or geometric slices of the world. Our initial notion of property emerged from the game theory of trying to coordinate with each other.</p><p>Because the Earth&#8217;s crust moves slowly, we could force our notion of property onto a very na&#239;ve geometric picture corresponding to slices of the world. But there is no simple objective source of property boundaries in the physical world, because physics itself has so many nonlocal interactions: if I move my arm so I am sending gravitons through your body, am I trespassing? 
As civilization and our desire to cooperate in complex ways grew, we advanced from geometric slices of the world to rights like rights of way so planes can fly over people's property, radio spectrum allocations, pollution credits, and so on.</p><h3>Space Property Rights</h3><p>We should expect a future rich with property rights experimentation. For instance, take property in outer space: plots of land do not necessarily generalize to bodies moving through space on ballistic trajectories. In our solar system, bodies orbit around each other and occlude each other with respect to sunlight and its energy. What if someone builds a <a href="https://en.wikipedia.org/wiki/Dyson_sphere">Dyson Sphere</a> between me and the Sun? Even if it didn't intersect my orbit, does it violate my property rights if I can no longer receive energy from the Sun? Determining rights in terms of orbits or radiation is clumsy and difficult.</p><p>The historical demarcation of plots of land is not a bad starting point for outer space property rights. However, by starting with something physical as an initial endowment, we face substantial transaction costs when rearranging our endowments into something adapted to the activities we want to engage in.</p><p>Imagine that to use a piece of radio spectrum as it passes over many plots of land, you had to acquire rights from every plot owner in that area. With existing property boundaries as vertical slices and the efficient rights as horizontal slices cutting across them, this is a worst-case misallocation of resources that maximizes transaction costs. Trying to get from the vertical regime to the horizontal regime by individual trades becomes extremely hard to do. 
For outer space property rights, we need to avoid outrageously high transaction costs.</p><p>If we discover a workable space property arrangement, it needs to be implemented with enough initial legitimacy that it isn&#8217;t immediately contested. Establishing legitimate resource claims relies on a secure title registry. We want to depend on this registry to do the right thing, not because of offered incentives, but because that is what its program requires. A permissionless blockchain might do the trick.</p><p>A <em>blockchain</em> has two characteristics making it ideal for creating an incorruptible cooperation layer. First, it operates according to its stated specifications. Bitcoin is a virtual computer built out of agreement rather than physical hardware. A tremendous number of separate machines replicate the same computation and cross-check each other. While physical hardware may have security trap doors, if a quorum of many different machines run by parties that don&#8217;t trust each other agree on a transaction, this is a credible machine to run our computation on.</p><p>The blockchain&#8217;s second innovation is that it provides an agreed message order. A single piece of arbiter hardware can implement censorship simply by deciding never to see a message it doesn&#8217;t want to accept. In contrast, at least under proof of work, messages coming into the blockchain first enter the memory pool, the transactions&#8217; waiting room, where they are visible to the blockchain&#8217;s miners. The miners compete to gather the messages into a block and publish them. 
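</p><p>A toy sketch of this ordering mechanism, assuming SHA-256 and a trivial difficulty rule (real mining differs in many details, from block header formats to difficulty adjustment):</p>

```python
import hashlib
import json

# Toy proof-of-work chain: a miner pulls pending messages from the pool,
# bundles them into a block, and links the block to its predecessor's hash.
# Publishing the block fixes an agreed order for those messages.
def mine_block(prev_hash, messages, difficulty=2):
    nonce = 0
    while True:
        header = json.dumps({"prev": prev_hash, "msgs": messages, "nonce": nonce})
        digest = hashlib.sha256(header.encode()).hexdigest()
        if digest.startswith("0" * difficulty):  # trivial difficulty target
            return {"prev": prev_hash, "msgs": messages, "nonce": nonce, "hash": digest}
        nonce += 1

mempool = ["alice pays bob 5", "carol pays dave 2"]  # the transactions' waiting room
genesis = mine_block("0" * 64, [])
block1 = mine_block(genesis["hash"], mempool)
# Changing any already-published message would change block1's hash, breaking
# the link from every later block: that is what makes the order tamper-evident.
```

<p>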
This prevents double spending, a problem fundamental to currency creation that we'll discuss next.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>For registering space property rights, it is the combination of the blockchain&#8217;s ability to act according to its stated specifications and to reliably service requests that makes it a candidate for creating incorruptible institutions. Knowing it&#8217;s the internal logic that makes it function reliably lets the rest of us incentive-align around it. <br><br>As long as there is no competing governmental title registry for the resources, claims on this title registry blockchain could be created without an opposing claim. There might be little interest in abolishing its legitimacy and much interest in upholding it. Suddenly, there are space property rights where previously there were none. Everyone is now either better off than, or as well off as, they would be in the absence of the registry. 
It's not impossible that a blockchain-based system for space resources starts off seeming so unreal to people that it can grow to substantial legitimacy before any government claims the same resources.</p><div id="youtube2-qHFnz36Ll4E" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;qHFnz36Ll4E&quot;,&quot;startTime&quot;:&quot;2s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/qHFnz36Ll4E?start=2s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Watch the seminar on <a href="https://foresight.org/salon/david-friedman-james-bennett-space-development-property-rights-legal-considerations/">Legal Considerations for Space Property Rights</a>. </p><h1>Reliable Money</h1><p>The legacy monetary system comes with <a href="https://time.com/5486673/bitcoin-venezuela-authoritarian/">a plethora of pathologies</a>: Robert Mugabe could print endless cash, inflating away the savings of Zimbabwe&#8217;s citizens. Vladimir Putin can freeze an NGO&#8217;s bank account, and refugees can get locked out of the banking system. Situations such as <a href="https://en.wikipedia.org/wiki/Operation_Choke_Point">Operation Choke Point</a>, when the US government used its control over banks to shut down unquestionably legal businesses, or when credit card companies blocked payments to WikiLeaks, show democracies aren&#8217;t insulated from these problems either.</p><p>Fortunately, Bitcoin happened. Built on the blockchain, Bitcoin injects internet properties such as programmability, interoperability, and composability into monetary value exchange, while guaranteeing its own scarcity. Bitcoin is a great example of a new layer of rules built on top of the internet&#8217;s permanence. 
We can think of its rules as a constitutional system for the digital jurisdiction, so that the web&#8217;s information economy can now operate within it. It directly threatens what offline jurisdictions consider their prerogative: minting money. Nevertheless, it succeeded at providing the world with a currency that is very costly to corrupt, even for governmental jurisdictions.</p><p>Because crypto offers a cross-jurisdictional, censorship-resistant exchange medium, it is attractive not only for people lacking access to institutions, but also for those who distrust them. It allows anyone to store, send, and receive money without asking permission and without proving one&#8217;s identity. Over half the world&#8217;s population lives under authoritarian regimes and could stand to benefit from this. In <a href="https://saifedean.com/tbs">Nassim Taleb</a>&#8217;s words, it offers an <em>&#8220;insurance policy against an Orwellian Future&#8221;.</em> Simple money is the most obvious high-leverage institution needing public non-corruptibility, and our initial success here is encouraging.</p><h3>Private Transactions</h3><p>Bitcoin gives us monetary sovereignty. But Bitcoin, and almost all existing cryptocurrencies, put all transactions on an open public ledger, readable by anyone anywhere. Ethereum, and most blockchains that support smart contracts today, do all their contract execution on an equally public ledger. Although participation is said to be anonymous, in practice it is easy to do traffic analysis, a form of statistical reasoning, to correlate blockchain activity with players in the physical world. The tools for doing traffic analysis are quickly becoming a commodity. And because blockchains are immutable, malicious actors have lots of time to learn more about our activities.</p><p>Blockchains without strong privacy, i.e. most blockchains today, create dangerous new opportunities for real-world crime. 
If real-world commerce moves onto such blockchains, private agreements become nearly impossible. So-called rubber-hose attacks and kidnapping come immediately to mind. Besides individual criminals, we should also worry about corrupt governments or interest groups engaged in human rights abuses or blackmail.</p><p>Innovations like zero-knowledge proofs make it possible to encrypt transactions fully while still allowing them to be verified via consensus. Blockchains with privacy would deny anyone the information needed to figure out whom to attack. Currently, this level of privacy is available for cryptocurrency but not widely used. We should make it the default. </p><div id="youtube2-tId4hJddtUw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;tId4hJddtUw&quot;,&quot;startTime&quot;:&quot;31s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/tId4hJddtUw?start=31s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Check out this <a href="https://foresight.org/salon/zero-knowledge-enabled-cooperation-halo-2-aleo-zooko-wilcox-ecc-howard-wu-aleo/">seminar on Zero-knowledge enabled Cooperation.</a></p><h1>Complex Contracts</h1><p>Complex play can emerge from simple rules. Through commerce we gradually figured out how to use contracts to bind ourselves to complex games. The right of contract holds that voluntary arrangements between consenting adults should be allowed because they let adults find novel ways to cooperate.</p><p>To cooperate with mutually suspicious parties, we mostly rely on legal contracts. Contract enforcement is expensive, and most disputed commercial relations fall well below the threshold for going to court or arbitration. This limitation creates barriers. 
Unless we are rich enough to take everything to court, complex forms of cooperation are restricted to those we trust and have long-standing iterated relationships with.</p><p>Being members of the same jurisdiction somewhat reduces risk, since operating under the same law lets each of us assume the other will behave well. The internet enabled the novel opportunity to create a website and cooperate with millions of total strangers. Still, on the internet, you might engage with someone who is completely anonymous. Some cyber attacks are committed in jurisdictions where the attackers aren't subject to punishment.</p><p>The result is that cooperation among strangers at a distance is restricted to simple interactions, such as giving away information for free. Even taking payment in exchange for something, a rather simple deal in itself, is often enabled by third parties such as credit card companies taking on the transaction risk. Our ability to cooperate richly with the majority of the world whom we don&#8217;t know remains underdeveloped.</p><p><em>Smart contracts</em> could change this. They were conceived by <a href="https://www.scirp.org/reference/referencespapers?referenceid=2905141">Nick Szabo</a> in 1996 to automate contract execution by embedding it &#8220;<em>in the hardware and software we deal with</em>&#8221;. Smart contracts are contract-like arrangements expressed in program code, where the program&#8217;s behavior enforces the contract&#8217;s terms. They create automated arrangements to which we bind ourselves by placing rights into escrow with the contract. These rights are only released back to the contract participants according to the agreed contract terms.</p><p><a href="https://lessig.org/images/resources/1999-Code.pdf">Lawrence Lessig</a> suggests we can achieve a particular end through different means, with legal contracts being one and computer code being another. 
Having only been able to build cooperative arrangements whose execution relies on human beings, we struggled to insulate them from human corruptibility. Smart contracts significantly increase the range of possibilities for what computer code can achieve. This makes cooperation reliable, credible, and trustworthy to an extent never achievable when institutions had to rely on human beings to function.</p><p>Ethereum, a general-purpose blockchain, was the first real-world system to establish a sound smart contract architecture. With Ethereum as a precedent, we now have a user-extensible rules system in which every contract is a new set of rules that cannot be shut down. Rather than being the last invention of this new level of the game, we should think of smart contracts as a key to unlock the next levels.</p><h3>Reusable Templates</h3><p>In the past, making attractive printed materials required going to a print shop. With the advent of laser printers and the Macintosh, suddenly regular people could create and print attractive designs at home. What followed was a flood of horrible-looking newsletters with a tremendous overuse of fonts. People had yet to develop the skills to best use their new tools, so demand built to improve them. Early desktop publishing programs evolved with templates guiding users away from using <em>Comic Sans</em> for everything.</p><p>Similarly, our cooperative arrangements are currently crafted by highly specialized, highly paid lawyers. They are supposed to be good at writing contracts that don't fail in unexpected, costly ways. Most of a legal contract is actually about how to deal with a variety of failure cases and pathological contingencies that a layperson never would have thought of. Early smart contracts will have effects no party would have wanted had they anticipated them. It takes time to develop tools and templates that embody expertise about unintended consequences and contingencies. 
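</p><p>The escrow pattern described above can be sketched in a few lines. This is a hypothetical toy in Python rather than a real contract language, and the amounts are made up:</p>

```python
# Toy escrow "smart contract": funds are locked in on creation and released
# only by the contract's own rules, not by either party's whim.
class Escrow:
    def __init__(self, price):
        self.locked = price      # funds held by the contract
        self.delivered = False
        self.released = False

    def confirm_delivery(self):
        self.delivered = True

    def release(self):
        # Terms: payment goes to the seller only after delivery is
        # confirmed, and only once.
        if not self.delivered or self.released:
            return 0
        self.released = True
        payout, self.locked = self.locked, 0
        return payout

deal = Escrow(price=80)
print(deal.release())    # 0 -- nothing delivered yet, funds stay locked
deal.confirm_delivery()
print(deal.release())    # 80 -- terms met, funds released to the seller
```

<p>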
Just as early, awkward design efforts evolved into today&#8217;s slick templates allowing anyone to create sophisticated publications, automated contracts will evolve from clumsy, costly proofs of concept to ever more sophisticated plug-and-play templates.</p><p>While code for smart contract templates is expensive to create the first time, it can be reused indefinitely. The more smart contracts are used, the more they get play-tested with regard to unintended consequences, so that we can rely on those that prove robust over time. As the ecosystem grows, so will the demand for improved skills. Those who improve first can create templates and tools to benefit the rest of us. <a href="https://doi.org/10.1007/978-3-642-37036-6_1">JavaScript</a> already delivers a secure and easy-to-use toolkit that empowers non-expert programmers to write simple smart contracts that others can understand. Equipped with the ability to enforce rights and responsibilities in code at low cost, people across the world can exchange, trade, and cooperate in new ways.</p><h3>Split Contracts</h3><p>Not everything can or should be automated. We may never be able to flawlessly represent our intentions in legalese or code without risking an outcome that does not reflect what we actually wanted. Dumb paper contracts lock in states without knowing what future participants will consider relevant. Lawyers try to freeze the next 10 years in hard-to-parse prose, only to litigate when the contract fails. When writing a contract, we are writing interaction rules for an unanticipated world. We face an alignment problem among current and future parties. An open contract is a plausible alignment strategy.</p><p>Rather than litigating and blaming when predetermined contracts fail, open contracts would let parties embrace change and improve the game&#8217;s next iteration. Often we will want to rely on a human at crucial steps to negotiate future parts of an agreement chain. 
They could take into account information from previous rounds and consider new circumstances when initiating the game&#8217;s next iteration. Preserving a human in the loop can be a useful feature. Sometimes we don&#8217;t want the automated contract to be the only credible commitment. Instead we want the credible commitment to account for factors that require human judgment.</p><p>A <em>Split Contract</em> would divide the contract into two parts: the automated part directly enforced by software, and the prose humans interpret in the case of dispute. If either party decides to dispute the contract&#8217;s automated part, pre-chosen arbitrators judge the disputed terms and may overrule them. This has two advantages:</p><p>First, the contract terms are fine-grained, specifying the amount paid upon committing to the contract, the amount paid upon delivering the answer, and the amount paid upon agreeing that it satisfies the buyer&#8217;s criteria. Dumb paper contracts leave the default outcome determined by wherever the asset happens to sit. By escrowing money, the contract changes this default distribution. In Nick Szabo&#8217;s terms, from possession being 90% of the law, we move to behavior becoming 90% of the law. By rearranging the burden of initiating a dispute, both parties get skin in the game to comply.</p><p>The second innovative feature is the fine-grained negotiation process for the contract terms. Payment amounts, prose text, and arbitrator selection are left up to the discretion of the contracting parties. By co-negotiating those parameters upfront, they can make expensive dispute resolution less likely. We dispute a contract hoping we win, but it carries some expense and risk. The obvious cost is that human arbitrators are expensive, but a more subtle one is uncertainty. If we don't dispute, we are stuck with a disliked, but known, outcome. Yet, once subject to the arbitrator's judgment, the result is less predictable. 
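</p><p>A minimal sketch of the split-contract mechanism in JavaScript (names invented for illustration): the automated part pays out by default, while a dispute hands the escrowed terms to pre-chosen arbitrators.</p>

```javascript
// Toy split contract: the automated part releases escrowed funds by default;
// a dispute hands settlement to human arbitrators who may overrule the terms.
// "arbitrate" stands in for human interpretation of the prose part.
function makeSplitContract(escrowed, arbitrate) {
  let disputed = false;
  return {
    dispute() { disputed = true; },
    settle() {
      if (!disputed) {
        // Automated default: the escrowed amount is released in full.
        return { toSeller: escrowed, toBuyer: 0 };
      }
      // Disputed: the arbitrators' verdict replaces the automated terms.
      return arbitrate(escrowed);
    },
  };
}
```

<p>The arbitrators&#8217; verdict only replaces the default split when someone pays the cost of initiating a dispute.</p><p>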
If we could have predicted it, we could have written a more mechanical prediction into the contract in the first place. Reducing the odds of a human dispute is a great hack for reducing enforcement costs.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><div id="youtube2-I7FUI4M5vGI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;I7FUI4M5vGI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/I7FUI4M5vGI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Watch this <a href="https://foresight.org/salon/split-contracts-comp-law-decentralized-arbitration-chip-morningstar-meng-weng-federico-ast/">seminar on Split Contracts and Decentralized Arbitration</a>. </p><h3>The Ancient Technology of Reputation</h3><p><em><a href="https://papers.agoric.com/assets/pdf/papers/markets-and-computation-agoric-open-systems.pdf">Mixed Contracts</a></em> combine the best of contracts with reputation. To see the value of reputation, let&#8217;s compare our consumer relationships to an iterated prisoner's dilemma in which a producer defects on the arrangement by selling a worthless product. To retaliate in the next round, we must notice this defection and decide to warn others away. Since this process is expensive and we frequently find ourselves in non-iterated games, reputation agencies have evolved. Reports by consumers put the producer in an iterated situation with the community as a whole, while trademarking establishes valuable long-term producer reputations.</p><p><em>Reputation</em>, if paired with contracts, can help create entirely novel arrangements. 
Crucially, such a reputation system could work even under pseudonymity. A <em>negative reputation system</em> is one where participants avoid entities with a bad reputation; a <em>positive reputation system</em> is one in which participants seek out those with a good reputation. Pseudonyms are problematic for negative reputation systems, because it is easy to shed a negative reputation by switching pseudonyms.</p><p>Positive reputation systems can still succeed as long as no one can claim another&#8217;s identity. Imagine pseudonymous Alice and Bob want to draft a contract that relies on legal enforcement. Say Alice also has a contract with Carol, so Alice is subject to legal enforcement that is invisible to Bob. Carol can assure Bob that she has enforcement power over Alice, since she knows Alice&#8217;s real identity. If Carol is reputable enough, this may assure Bob even if the specifics are opaque to him.</p><p>If Carol wants to continue being consulted for arranging deals between the Alices and Bobs of this world, she will make sure she can honor her part of the contract. This role is similar to the one credit card companies played in enabling today's e-commerce. 
The demand for reputation information may eventually provide a market for pseudonymous reputation services, analogous to credit rating agencies and consumer review services.</p><div id="youtube2-rwj46cwa0Sc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;rwj46cwa0Sc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/rwj46cwa0Sc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Check out this <a href="https://foresight.org/salon/arthur-breitman-tezos-blockchain-governance-marc-stiegler-the-digital-path/">seminar on Blockchain Governance</a>. </p><h3>A New Menu of Organizational Choices</h3><p>When starting a new organization, today's laws constrain founders to choose from a menu of confusing options ranging from a for-profit to a classic non-profit 501(c)(3), with several organizational types in-between. The combination of court cases and legislative compromise that led to the particular combinations available in each state is so complicated that only a lawyer can untangle them. These options solidified in a very different world, dating back to before the World Wide Web. <br><br>Compare this to a <em>Decentralized Autonomous Organization, </em>aka <em>DAO.</em> It&#8217;s a network-native entity with no associated central management and its own asset pool. Smart contracts among the organization&#8217;s stakeholders determine how the assets may be used. Through DAOs, people can participate in many new organizational structures that distribute reputation or money according to value contributed or make decisions via the wisdom of the crowd. 
It is very costly for people within the organization to prevent the relevant set of immutable smart contracts from executing as specified. Because they run on a blockchain with international nodes, these contracts are more resistant to internal and external corruption. This makes them an excellent candidate for experimenting with previously unthinkable cooperative arrangements. <br><br>DAO templates, pol.is voting systems, and other emerging tools could equip the rising tide of internet communities with collective ways to pool assets and govern them in pursuit of shared goals. Many experiments will fail, but we only need a few successes to create an entirely new menu of organizational choices. After all, current institutions are also made of contracts.</p><div id="youtube2-p-RnH5zhNVM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;p-RnH5zhNVM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/p-RnH5zhNVM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Watch this <a href="https://foresight.org/salon/daos-daostack-decentraland-sifchain-researchhub-vitadao/">group discussion on Decentralized Autonomous Organizations</a>.  </p><h1>Collective Action Problems</h1><p>The world is full of possibilities we all would like to see realized, but that no individual has an incentive to contribute to. This is especially true if we think we can benefit without contributing: the classic free-rider problem at the root of the world's nastiest collective action dilemmas. But what if, instead of worrying that a small contribution won&#8217;t make a difference, you could ensure that it does? 
Kickstarter allows you to pre-commit to fund an open source project of your choice if and only if enough others commit to do the same to reach a critical threshold. If we all want an outcome enough, we can often move there via assured agreement. If not, no action takes place and any committed funds are returned. Such <em>Assurance Contracts</em> are great tools for coordinating large, multiway group commitments.</p><h3>Dominant Assurance Contracts</h3><p>A problem with assurance contracts is that even with minor transaction costs there is too little incentive to contribute. Imagine an open source software improvement costs $800 to fund and is worth $100 to each of 10 people. If each person pays $80, they get the results and are all better off. But each person may choose not to donate, because they may still benefit from the fixed software even if they don&#8217;t contribute. If you think that no one else will contribute, it&#8217;s rational for you not to contribute; however, if you don&#8217;t, then why would anyone else? Not contributing can quickly turn into a self-fulfilling prophecy.</p><p>A creative solution, introduced by <a href="https://www.cato-unbound.org/2017/06/07/alex-tabarrok/making-markets-work-better-dominant-assurance-contracts-some-other-helpful">Alex Tabarrok</a>, is the <em>Dominant Assurance Contract</em>. It&#8217;s an assurance contract with the added condition that if the funding benchmark isn&#8217;t reached, the sponsor must pay a prize to the pledgers. 
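</p><p>Using the numbers from the example above (an $800 improvement worth $100 to each of 10 people pledging $80), the payoff rule can be sketched in JavaScript; the $5 prize is an assumed figure for illustration:</p>

```javascript
// Payoff to a single pledger under a dominant assurance contract: if total
// pledges reach the threshold, the good is produced and each pledger nets its
// value minus the pledge; if not, pledges are refunded and the sponsor pays
// each pledger a prize out of pocket.
function pledgerPayoff(numPledgers, pledge, threshold, value, prize) {
  const funded = numPledgers * pledge >= threshold;
  return funded ? value - pledge : prize;
}
```

<p>With these numbers, a pledger nets $20 if the contract funds and $5 if it fails, so the pledger comes out ahead either way.</p><p>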
Pledging now becomes a dominant strategy, or in Tabarrok&#8217;s words, &#8220;<em>a no-lose proposition&#8212;if enough people pledge you get the public good and if not enough pledge you get the prize.</em>&#8221;</p><div id="youtube2-Cjxl11sAbN0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Cjxl11sAbN0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Cjxl11sAbN0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Listen to this <a href="https://foresight.org/salon/dominant-assurance-contracts-alex-tabarrok-george-mason-university/">seminar on Dominant Assurance Contracts. </a></p><h3>Quadratic Funding</h3><p>Another potential solution to the free-rider problem of public goods is <em><a href="https://doi.org/10.2139/ssrn.3243656">Quadratic Funding</a></em>. By pledging even small amounts to desired goods, participants direct how matching funds from external entities are allocated. The trick is that matched amounts are calculated using the quadratic funding formula: the amount a project receives is proportional to the square of the sum of the square roots of its individual contributions. Matching sums are thus driven more by the number of supporters than by the amount donated. This creates strong incentives to give even a little, ultimately countering incentives to free-ride.</p><p><em><a href="https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c">Retroactive Public Goods Funding</a></em> adds to this the observation that it&#8217;s often much easier to agree on what was useful than on what will be useful. By rewarding public goods projects that were successful, it can incentivize their creation. 
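</p><p>The quadratic funding formula described above can be sketched directly in JavaScript; note that real deployments additionally normalize the match against a fixed matching pool, which this sketch omits:</p>

```javascript
// Quadratic funding: a project's total funding is proportional to the square
// of the sum of the square roots of its individual contributions; the match
// is that total minus what contributors actually gave.
function quadraticMatch(contributions) {
  const sumOfSqrts = contributions.reduce((sum, c) => sum + Math.sqrt(c), 0);
  const total = sumOfSqrts ** 2;
  const raised = contributions.reduce((sum, c) => sum + c, 0);
  return total - raised;  // matching funds added on top of what was raised
}
```

<p>One hundred donors giving $1 each attract a $9,900 match, while a single $100 donor attracts none, even though both projects raised $100: the formula rewards breadth of support.</p><p>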
Such experiments are at an early stage but, if successful, they can create what <a href="https://80000hours.org/podcast/episodes/vitalik-buterin-new-ways-to-fund-public-goods/">Vitalik Buterin</a> calls <em>&#8220;a general purpose infrastructure for funding public goods in the same way that money is a general purpose infrastructure for funding private goods.&#8221;</em></p><div id="youtube2-s0pxhgVpRL0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;s0pxhgVpRL0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/s0pxhgVpRL0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><a href="https://foresight.org/salon/glen-weyl-radicalxchange-social-technology-for-a-political-economy-of-increasing-returns/">Listen to Glen Weyl&#8217;s talk on how to develop political and economic systems that take more of the real social and political life of people into account</a>.</p><h1>Chapter Summary</h1><p>Various costs prevent us from cooperating creatively. It takes time, energy, and money to find each other, bargain an agreement, and enforce it. If only we could lower these costs, cooperation could be rich. The digital realm provides new arenas to reinvent cooperation. New media, property rights, and contract experiments are just the tip of the iceberg. Most of these experiments will fail but the few that succeed may usher in a new era of cooperation.&nbsp;It&#8217;s hard to see how to get there from here. 
In the next chapter, we&#8217;ll take the first steps.<br></p><h3>Next chapter: <a href="https://foresightinstitute.substack.com/p/genetic-takeover">GENETIC TAKEOVER | Cryptocommerce </a></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Thank you to Chris Hibbert, who coordinated and maintained the market.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Miners have opportunistically delayed messages before. We should think about better approaches to that danger, especially as mining is increasingly centralized in a few Chinese nodes. Nevertheless, blockchain still bounds the problem. A message cannot stay censored for long because of its content, given a competitive market of miners. So even though there is some corruption opportunity on the edge, such as to front-run, the messages should always get serviced reasonably promptly. Pre-crypto, perhaps our best hope at creating trustworthy money was via competition and reputation feedback. But relying on reputation is <a href="https://blog.chain.link/">insufficient</a>; reliable operation of money matters most in emergencies when stressed people make compromises regardless of their reputation. Bitcoin, and a growing number of crypto alternatives, are doing something better. 
By examining its internal workings, we know that the money itself operates in an incorruptible manner, regardless of external pressures.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>AMIX, The American Information Exchange, shown below, was an early prototype of a Split Contract. Designed back in the 1980s, before the web browser, it was a computer-mediated market for matching information buyers and sellers. Up to then, finding information on a topic involved fishing for relevant bits of knowledge from the sea of newspapers, TV, journals, and books. AMIX replaced this random walk with a targeted market on which you could ask questions and those with relevant expertise could sell their answers. Barbara, an early AMIX user, wants to hire a consultant to figure out whether it makes sense to build a co-working space in Palo Alto. She defines what would count as a satisfactory answer and uploads the request to the exchange. If a consultant delivers an answer and thirty days go by without Barbara taking any action, the automated contract would pay him the full amount. To stop the payment, Barbara would have to explicitly state that his document does not meet her terms. The contract would then take those contradictory propositions to the arbitrators, who could only overrule the amounts paid upon acceptance of the document. With a paper contract, by contrast, the consultant could have given Barbara a perfectly fine document. 
She could refuse to pay, leaving him with the expensive legal action to get what she owes him.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qCON!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qCON!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 424w, https://substackcdn.com/image/fetch/$s_!qCON!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 848w, https://substackcdn.com/image/fetch/$s_!qCON!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!qCON!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qCON!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png" width="1456" height="676" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:676,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qCON!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 424w, https://substackcdn.com/image/fetch/$s_!qCON!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 848w, https://substackcdn.com/image/fetch/$s_!qCON!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!qCON!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff037146f-391b-4af8-9b89-602ad8d515f7_2204x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div></div></div>]]></content:encoded></item><item><title><![CDATA[4. SKIM THE MANUAL | Intelligent Voluntary Cooperation & Paretotropism]]></title><description><![CDATA[Previous chapter: MEET THE PLAYERS | Value Diversity]]></description><link>https://foresightinstitute.substack.com/p/skim-the-manual</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/skim-the-manual</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:23:35 GMT</pubDate><enclosure url="https://cdn.substack.com/image/youtube/w_728,c_limit/MYJfntH0OEg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><a href="https://foresightinstitute.substack.com/p/meet-the-players">Previous chapter: MEET THE PLAYERS | Value Diversity</a> </h3><p><br>Having met our players, with their diversity of values, we can&#8217;t rely on aligning them on one grand strategy. 
Instead, to set civilization up well over the next rounds of play, we must build a playing field that can handle fundamental value differences.&nbsp;</p><h1>A Manual for Strengthening Civilization</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5oIz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5oIz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 424w, https://substackcdn.com/image/fetch/$s_!5oIz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 848w, https://substackcdn.com/image/fetch/$s_!5oIz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 1272w, https://substackcdn.com/image/fetch/$s_!5oIz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5oIz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png" width="676" height="536.744" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1000,&quot;resizeWidth&quot;:676,&quot;bytes&quot;:283178,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5oIz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 424w, https://substackcdn.com/image/fetch/$s_!5oIz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 848w, https://substackcdn.com/image/fetch/$s_!5oIz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 1272w, https://substackcdn.com/image/fetch/$s_!5oIz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feceeb602-4020-47d0-bbc1-d1667d700531_1000x794.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Bob&#8217;s Preferences</strong></p><p>Imagine a game of civilization played by Alice and Bob. A world in which different players have different goals can be described in terms of preferences among future states. The center dot is the current state of the world that players Alice and Bob are in. The axes are the world states, organized by Alice's preferences vertically and by Bob's preferences horizontally. 
Bob prefers the green worlds to the current world.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bRtQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45e948cd-24a3-4be2-bea8-2a05b2932ed8_866x708.png"><img src="https://substackcdn.com/image/fetch/$s_!bRtQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45e948cd-24a3-4be2-bea8-2a05b2932ed8_866x708.png" width="696" height="569" alt="" loading="lazy"></a></figure></div><p><strong>Positive Sum &amp; Negative Sum</strong></p><p>If we could extrapolate utilities from Alice&#8217;s and Bob&#8217;s preferences, we could say that their interactions lead to outcomes of greater or smaller overall utility. Meaningfully comparing utilities across players becomes more problematic the more diverse their futures get. 
But for now, let&#8217;s assume that everything to the upper right of the red line is a &#8220;positive sum&#8221; outcome, and everything to the lower left is a &#8220;negative sum&#8221; outcome.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qu1C!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0cc2000-083b-4fe9-8de2-0c80f15e25e9_950x774.png"><img src="https://substackcdn.com/image/fetch/$s_!Qu1C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0cc2000-083b-4fe9-8de2-0c80f15e25e9_950x774.png" width="950" height="774" alt="" loading="lazy"></a></figure></div><p><strong>Voluntary Cooperation</strong></p><p>There is a problem with simply seeking positive sum outcomes. If a deal would leave Bob worse off than he currently is, he would fight any attempt to get there. Likewise, Alice would fight the positive sum outcomes she likes less than the status quo. But if both Alice and Bob end up at least as well off as they currently are, both have good reason to cooperate. Together, they can move to <em>Pareto-preferred</em> worlds: situation B is Pareto-preferred to situation A if at least one player prefers B to A, and no one prefers A to B. Those worlds can be reached by voluntary cooperation. 
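</p><p>This definition is concrete enough to sketch in a few lines of code (a hypothetical illustration, not from the book): treat each world as a tuple of per-player utilities, and call B Pareto-preferred to A when no player is worse off in B and at least one player is better off.</p>

```python
# Hypothetical sketch: worlds as per-player utility tuples (Alice, Bob).
def pareto_preferred(b, a):
    """True if world b is Pareto-preferred to world a: no player
    prefers a to b, and at least one player prefers b to a."""
    return (all(ub >= ua for ub, ua in zip(b, a))
            and any(ub > ua for ub, ua in zip(b, a)))

current = (3, 3)   # illustrative numbers: (Alice's utility, Bob's utility)
green = (5, 4)     # both at least as well off: reachable voluntarily
blue = (9, 1)      # larger total, but Bob loses: he will resist

assert pareto_preferred(green, current)
assert not pareto_preferred(blue, current)
assert sum(blue) > sum(current)  # "positive sum" alone is not enough
```

<p>The sketch makes the asymmetry plain: a larger total is not enough; no player may be made worse off.</p><p>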
For human players, we could say these interactions are &#8220;freely&#8221; consented to; for non-human players, they simply follow from their &#8220;internal logic&#8221;.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MyLj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15629540-91cf-4d6a-9159-66d9d960e256_888x744.png"><img src="https://substackcdn.com/image/fetch/$s_!MyLj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15629540-91cf-4d6a-9159-66d9d960e256_888x744.png" width="720" height="603" alt="" loading="lazy"></a></figure></div><p><strong>Cooperation Across Humans</strong></p><p>Human similarities also come with the tendency to compare oneself to others, including strong fairness intuitions and envy reactions. If Alice&#8217;s gain is perceived as too unfair, only she would be invested in bringing about that future, even if, all else equal, Bob would have consented to the deal. This all-too-human tendency to compare ourselves to others may lead Bob to reject a Pareto-preferred deal. 
It narrows the scope of what the world&#8217;s human players can achieve by voluntary cooperation.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4JSH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30f415d8-92ba-49b7-a528-0f16252d4718_908x722.png"><img src="https://substackcdn.com/image/fetch/$s_!4JSH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30f415d8-92ba-49b7-a528-0f16252d4718_908x722.png" width="908" height="722" alt="" loading="lazy"></a></figure></div><p><strong>Cooperation Across Intelligences</strong></p><p>Traditionally, the definition of an agent with a utility function assumes a comparability that future intelligent systems won&#8217;t necessarily share. Without meaningful metrics on which to compare utility across very different mind architectures, the diagonal red line separating positive sum from negative sum disappears.</p><p>As long as players have goals and act as though they make choices, they will have revealed preferences. Those revealed preferences may be all we have when designing systems for players to reach their goals. Upholding voluntary cooperation could remain a stable common goal for both Alice and Bob across many rounds of future games, regardless of their intelligence. It&#8217;s all they need to unlock Paretotropian worlds that are better for each by their own standards. 
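</p><p>As a toy illustration of working from revealed preferences alone (hypothetical code, not from the book; the world names are invented): no utility numbers and no cross-player comparison are needed, only records of which options each player chose over which others.</p>

```python
# Hypothetical sketch: inferring revealed preferences from observed choices.
def revealed_preferences(observations):
    """Map each option to the set of options it was chosen over."""
    prefers = {}
    for chosen, rejected in observations:
        prefers.setdefault(chosen, set()).add(rejected)
    return prefers

# Observed choices only; no utility scale, no comparison across players.
alice = revealed_preferences([("green world", "status quo")])
bob = revealed_preferences([("green world", "status quo"),
                            ("green world", "blue world")])

# Worlds each player has revealed they prefer to the status quo:
alice_ok = {w for w, beats in alice.items() if "status quo" in beats}
bob_ok = {w for w, beats in bob.items() if "status quo" in beats}

# The intersection is the set of candidate moves for voluntary cooperation.
assert alice_ok & bob_ok == {"green world"}
```

<p>Each player&#8217;s ordering stays internal to that player; only the overlap of their revealed preferences matters for finding a move both will accept.</p><p>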
</p><p>The rest of this book is about how to set civilization up for this path of <em>intelligent voluntary cooperation</em>. This path has three components:</p><h2>1. Upholding Voluntarism</h2><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!alBo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc9e69a-af3a-4800-a55e-3a519929916f_864x732.png"><img src="https://substackcdn.com/image/fetch/$s_!alBo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fabc9e69a-af3a-4800-a55e-3a519929916f_864x732.png" width="700" height="593" alt="" loading="lazy"></a></figure></div><p><strong>Involuntary Positive Sum</strong></p><p>Imagine that Alice, seeking a positive sum arrangement that makes her vastly better off, explains to Bob: &#8220;I'll be better off by more than you'll be worse off&#8221; and embarks on her way to the blue point in the diagram. Bob doesn't like this plan, so we have a conflict. 
Not only do we have a conflict; Alice expects the conflict, and Bob expects that Alice expects it.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6x5k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fd3b7a2-18b1-4605-834c-a4dda05ccec9_940x772.png"><img src="https://substackcdn.com/image/fetch/$s_!6x5k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fd3b7a2-18b1-4605-834c-a4dda05ccec9_940x772.png" width="712" height="585" alt="" loading="lazy"></a></figure></div><p><strong>Hobbesian Traps</strong></p><p>In expectation of Alice&#8217;s involuntary action, Bob may strike first. Alice, expecting this, will want to weaken Bob first. This cascade of mutually expected conflict can result in a Hobbesian Trap, in which the mutual expectation of conflict creates a preemptive conflict. While cooperation is better for both sides, lack of trust or fear of defection can lead to first-strike instabilities, wars, and other terrible games. By reliably upholding voluntary interaction as a Schelling point, and signaling this to Bob, Alice can lessen her and Bob&#8217;s incentive to introduce and abuse precedents that could spiral into Hobbesian Traps.</h2... wait</p><h2>2. 
Improving Cooperation</h2><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L9vM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc790b0a3-a54b-4017-bebd-920ccf0140e1_868x772.png"><img src="https://substackcdn.com/image/fetch/$s_!L9vM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc790b0a3-a54b-4017-bebd-920ccf0140e1_868x772.png" width="680" height="605" alt="" loading="lazy"></a></figure></div><p><strong>Single Anonymous Interaction</strong></p><p>Today&#8217;s world of cooperation is complex and involves anonymous situations where neither reputation nor contracts can get a grip. In situations where all players jointly prefer an outcome but cannot reach it through voluntary interaction alone, an obstacle blocks their path to Pareto-preferred worlds. To cross that obstacle, we need better tools for cooperation.</p><p>Imagine Bob gave Anonymous Alice 10 shekels for the promise of a Gourd. She is 10 shekels richer and he is 10 shekels poorer. 
Anonymous Alice would no longer have a reason to give Bob what he wants. When Bob inquires where his Gourd is, Anonymous Alice may run off, quoting Hobbes: &#8220;For he that performed first has no assurance the other will perform after, because the bonds of words are too weak&#8221;. Bob, knowing this, would never give Alice the 10 shekels in the first place. What if Bob could lock up his 10 shekels in an escrow that automatically pays Anonymous Alice once she proves that she has delivered the Gourd?</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!y2No!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0d97e092-8213-48c8-8aa7-da0d1b65e8d6_822x748.png" alt=""></figure></div><p><strong>Improving Technologies of Cooperation</strong></p><p>A tremendous number of remaining problems are basically the same phenomenon writ large. If all the Alices and Bobs in this world could find each other, they may be able to build bridges to Pareto-preferred worlds. In reality, collective action dilemmas often involve thousands, or even millions, of players interacting simultaneously. Rather than simple trades, they can be complex arrangements that unfold over time, and players cannot reach preferred worlds in a way that ensures all are better off at each step along the way. To tackle these problems, we need to innovate in technologies of cooperation and democratize their use. </p><h2><sup>3.
Intelligizing Voluntary Cooperation</sup></h2><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!u0sm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dc5ed9-a028-4686-ab00-ecceb6ba1857_826x714.png" alt=""></figure></div><p><strong>Civilization as a Diagram with 7+ Billion Dimensions</strong></p><p>When considering Alice and Bob, we need to remember that we&#8217;re actually looking at a diagram with more than 7 billion dimensions. Any particular interaction involves only a bounded set of aspects of the world and only a bounded number of participants.
For each one of these interactions, there is a separate diagram in which we can organize the possible states of the world.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!wbLM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be11682-ab1a-46a7-98c0-3708d141b29f_890x764.png" alt=""></figure></div><p><strong>Voluntary Independence</strong></p><p>Imagine Carol and Dave are in some other part of the world. They have never heard of Alice and Bob, who have never heard of Carol and Dave. Let&#8217;s rotate Bob out of the diagram and rotate Carol in. In parallel to Alice&#8217;s and Bob&#8217;s interaction, Carol is cooperating with Dave to bring about a world state in which both of them are better off. Even though each of these individual transitions hugs the edges of its own Pareto-box, collectively they are taking orthogonal steps into the Pareto-preferred area.</p><p>For the right to choose to cooperate to be meaningful, we also need the right to choose not to cooperate. Voluntary independence actually makes up most of the 7+ billion dimensional diagram: most people don&#8217;t know each other, and most of their activities have no strong connection to most other activities in the world.
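</p><p>The Pareto logic behind these parallel steps can be made concrete. Below is a minimal sketch (the four-player world and the payoff numbers are invented for illustration): a move is Pareto-preferred if it leaves no player worse off and at least one better off, and independent pairwise trades compose into a global Pareto improvement.</p>

```python
# World states as per-player payoff tuples for (Alice, Bob, Carol, Dave).
# The numbers are illustrative only.

def pareto_preferred(old, new):
    """True if `new` leaves no player worse off and at least one better off."""
    pairs = list(zip(old, new))
    return all(n >= o for o, n in pairs) and any(n > o for o, n in pairs)

world = (5, 5, 5, 5)
after_ab = (6, 6, 5, 5)   # Alice and Bob trade; Carol and Dave untouched
after_cd = (6, 6, 6, 6)   # in parallel, Carol and Dave trade on their own axes

assert pareto_preferred(world, after_ab)     # one local step
assert pareto_preferred(after_ab, after_cd)  # an orthogonal step
assert pareto_preferred(world, after_cd)     # the steps compose
```

<p>Each local trade hugs the edge of its own Pareto-box, yet the composed move improves the world for everyone at once.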
</p><p>Thanks to the independence of the arrangements that form in this vast experimentation space, some arrangements can go forward while others get stuck. The system continuously selects for arrangements that create productive cooperation, so that it comes to be dominated by their beneficial results. While we should keep improving our arrangements, the process is ultimately about aligning everyone&#8217;s expectations of increasing payoffs by actually increasing payoffs.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!oPzr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86ca35f2-a2b4-45a8-a612-8075a28551ed_818x742.png" alt=""></figure></div><p><strong>Strengthening Civilization&#8217;s Paretotropism</strong></p><p>Plants have a phototropism: they grow toward the light. Civilization has no goals of its own, but it does have a dynamic, a <em>paretotropism</em>: in the same way that plants grow toward the light, civilization, emerging from our voluntary interactions with each other, progresses toward worlds that are generally better for everyone. By cooperating through voluntary choices, and otherwise leaving each other alone in voluntary independence, our civilization is climbing Pareto-preferred hills.
</p><p>To continue on this trajectory, we need to extend the architecture of voluntary cooperation that has emerged among humans to the other intelligences that will soon dominate the playing field. </p><p>Intelligent voluntary cooperation relies on respecting the boundaries of other entities, so that their reactions follow from their own internal logic. Among humans, we have long-evolved norms around respecting our corporeal boundaries. In today&#8217;s object-oriented programming, specialized computing entities are encapsulated so that one object cannot tamper with another&#8217;s contents. Such existing examples of boundaries may provide useful guidance when designing the cooperation architectures for the next rounds ahead. Successfully integrated into our fabric of voluntary request-making, new intelligences have the potential to turbocharge the paretotropism of our civilization.</p><h1>Intelligent Voluntary Cooperation: The Real World </h1><p>Having looked at a manual for <em>intelligent voluntary cooperation</em>, let&#8217;s see how the strategy works in the real world and which lessons to apply in the game ahead.</p><h2>1. Upholding Voluntarism</h2><h3>The Game So Far: Voluntary Schelling Points Emerge</h3><p>How has voluntarism shaped our civilizational game? Data on prehistoric societies suggests that our ancestors lived with extraordinarily high rates of violence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Since then, violent deaths have decreased, first relative to population size, and more recently in absolute numbers.
Major powers have not fought a physical war in a while, and while there are more civil conflicts, they bring fewer deaths.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>The decline in violence hasn&#8217;t been perfect, but it has gradually shifted the balance of our interactions from involuntary towards voluntary ones. We increasingly have the freedom to say no to interactions rather than being forced into them.</p><p>With cooperation becoming a better strategy than violence for achieving our goals, it became beneficial to understand other people&#8217;s goals so we can offer them cooperation opportunities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> If I need money and want to sell you a product, it pays if I can imagine what kind of product you would enjoy. Effective cooperation, such as in commerce, involves creating situations in which others serving our goals also serves theirs. The instrumental goal of cooperating evolved into the felt goal of caring about others&#8217; goals, i.e. <em>empathy</em>. Global mass media may have helped extend our empathy by increasing our ability to put ourselves in the shoes of strangers. Seeing each other as fellow humans, rather than dangerous outsiders, further reduces the impetus for violence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>Think of one of the many bloody territorial wars across history. Often, not even the victors come out net positive. All players have an interest in avoiding such costly struggles, and all sides should want to find a point of mutual agreement without anyone feeling they&#8217;ve made too much of a concession.
If they can agree that a disputed territory has an obvious geographic marking, such as a river, this common-knowledge landmark could be a good point to settle on. The river can serve as a <em>Schelling Point</em> for coordination.</p><p>Thomas Schelling pioneered the idea that even without communication, we can often cooperate by converging on a focal point that each of us expects to stand out for the other.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> This is why many territorial boundaries follow a mountain range or a river.</p><p>We find a similar situation in American politics today. Political factions are increasingly polarized, and neither side finds the U.S. Constitution to be an ideal document for advancing its cause. Each party has some interest in a Constitutional Convention that would rewrite the Constitution to serve its purposes exactly. However, all parties are terrified of risking such a Convention, and we should expect strong lobbying against it. The potential outcome is sufficiently risky that, while the current framework is not ideal for any party, it is better than what they might end up being forced to accept. The U.S. Constitution is stable because all players recognize that the prospect of its instability is more frightening than living with what we now have.</p><p>Just as the river serves as a territorial Schelling Point, and the Constitution serves as a Schelling Point for U.S. governance, voluntarism is increasingly becoming a Schelling Point for our civilization. We all tend to gain by cooperating with each other, and the more we rely on these dynamics to achieve our goals, the more we create feedback that keeps pushing the rules in that direction. Even if it is occasionally in your interest to engage in involuntary interactions, your overall interest may well be to uphold the voluntary system if you expect to benefit from it.
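</p><p>Schelling&#8217;s focal-point logic can be sketched as a toy coordination game (the boundary options and salience scores below are invented for illustration): players win only if they independently choose the same option, so each simply picks the most conspicuous one.</p>

```python
# Toy Schelling coordination: two players choose a boundary independently
# and succeed only by matching. Salience scores are invented for the example.

SALIENCE = {"river": 0.9, "meridian line": 0.4, "old trade road": 0.3}

def focal_choice(salience):
    """Each player independently picks the most salient option."""
    return max(salience, key=salience.get)

player_a = focal_choice(SALIENCE)
player_b = focal_choice(SALIENCE)

# Without any negotiation, both converge on the same landmark.
assert player_a == player_b == "river"
```

<p>The point is not the numbers but the structure: coordination succeeds because prominence is common knowledge, which is exactly why rivers and mountain ranges so often end up as borders.</p><p>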
Weaker norms might better enable you to cheat, but they also better enable everyone else to do so, and the overall system becomes less likely to serve your own interests.</p><p>To the extent that we make exceptions for ourselves and rely on involuntary means when interacting with others, we establish precedents for interactions that overpower some players by means they never agreed to.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> When voluntary Schelling Points are lost, we can spiral into Hobbesian traps, in which the mutual expectation of involuntary action leads to pre-emptive conflict. </p><p>Since the first-strike instability of the Cold War, we have known that such traps can bring us to the brink of destroying the world.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> World War I, with its entrenched trench-warfare stalemate, was another such trap: both sides suffered tremendously, but neither knew how to get out of the situation. We must avoid Hobbesian Traps at all costs.</p><h3>The Game Ahead: Multipolarity &amp; Compensating Dynamics</h3><h4>Multipolarity: More Like Natural Language, Less Like Governments</h4><p>The more players uphold voluntary interaction as a Schelling Point, the more we can rely on it. Unipolar systems with a single dominant actor can arbitrarily breach voluntarism without fearing feedback. Power is diminished when divided, and so is the power to engage in involuntary interactions. In multipolar systems, actors can keep each other in check.</p><p>Take natural language as the ideal example of a voluntary system emerging from multipolar interaction.
Historically, there was no understanding of the hard difference between the concepts of heat and temperature, or mass and weight.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> Instead, there was a cloud of words that loosely mapped onto the same vague cloud of meanings. When people tried to reason precisely about the heat-and-temperature cloud of meaning, they tied themselves in intellectual knots.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Distinguishing between heat and temperature allowed people to pick words from these clouds and nucleate them into distinct terms for the newly explained concepts. Sometimes it goes the other way, and concepts thought unrelated suddenly become connected: &#8220;entropy&#8221;, for instance, is a synthesis of signals from engineering and thermodynamics.</p><p>The coherence of natural language does not emerge from the top-down decisions of an authoritative governance structure. Instead, it is a continuous re-negotiation with words as Schelling Points for meaning.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> The evolution of words has both a drive toward coherence, where we mean the same thing, and sudden switching events, as we discover new concepts and coin new vocabulary and jargon. Language is a beautiful, spontaneous order that emerges from the interaction of many players under the incentive of wanting to be understood by each other.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><p>If language is a good example of extreme decentralization, markets are in an intermediate category.
Markets don&#8217;t have decision-making power over the overall system, but neither do they emerge only from individuals contracting directly with each other. Corporations are the dominant entities. They have internal centralized decision-making powers coexisting with a lot of bottom-up activity, and are themselves embedded within a multipolar world of markets. While individual large corporations are often seen to have a degree of agency, their overall market activity is still emergent from many parties interacting to try to influence the process.&nbsp;</p><p>Let&#8217;s go up one layer of centralization to governments. In the U.S., power is decentralized in multiple ways via the Constitution. First, the Founding Fathers left most power in the hands of individuals and didn&#8217;t centralize it in government. Second, they left most of the remaining governmental power to the states, not the federal government. Finally, the federal government&#8217;s power was divided among the legislative, judicial, and executive branches, and restricted via the Bill of Rights constitutional amendments. The founders were not confident this balance would be stable, and there is no guarantee it will be. Today, we see formerly decentralized decisions being gradually transferred from the states to the federal level.&nbsp;</p><p>If we want cooperative partners to bring their local knowledge to bear and keep each other in check, any arrangement that can be more decentralized should be. This is embodied in the <em>subsidiarity</em> principle: when seeking to come to joint decisions, we should do so at the smallest scale necessary for the decision. Subsidiarity would not only favor decentralizing decisions from the national level to the states. It would also mean strengthening the role of cities, where most of us directly experience the consequences of decisions.
Within cities, subsidiarity would encourage decisions at the fine-grained local community level, with fluid processes and reciprocity.</p><p>A move away from dominance hierarchies can be very beneficial. Robert Sapolsky shows how, in a baboon troop whose dominant violent males died in an accident, the lower-ranking baboons, rather than reestablishing the traditional hierarchical pathologies, remained more peaceful and cooperative over the long run.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> If baboons can do it, human players should be able to. The more we use multipolar interaction architectures (like languages) and the less we use unipolar ones (like governments), the more we can rely on each other to uphold systems of voluntary cooperation.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a>&nbsp;</p><h4>Compensating Dynamics: From Peace of Westphalia to Crypto Nations</h4><p>History shows we can work against strong pressures to centralize. The Peace of Westphalia ended the notoriously bloody Thirty Years&#8217; War, a religious war in Europe in which each warring state sought to involuntarily impose its values onto the other side.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> All parties paid huge costs for following their own perceived best course of action. Finally, in 1648, they negotiated a peace allowing each nation to determine religious practices on its side of the boundary with little interference. Sweden and France were assigned the role of guarantor powers that were, by treaty, obligated to defend the constitution of the Holy Roman Empire. They could be called upon by anyone injured under the Peace.
Instead of just providing friction against power centralization, this arrangement introduced a feedback loop that actively corrected for centralization and pulled the system away from it.&nbsp;</p><p>The Peace of Westphalia lasted for centuries, and it is sometimes regarded as one of the first applications of the <em>collective security principle</em> in international relations.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> In a collective security arrangement, &#8220;<em>each state in the system accepts that the security of one is the concern of all, and therefore commits to a collective response to threats to, and breaches to peace</em>&#8221;. When one state in a pact gains power that threatens the peace, the other states cooperate collectively against it. The actions of the rest restore the balance and provide security for all.</p><p>What is better than merely withstanding centralization? Actively self-correcting for it. If we realize early that part of an arrangement is becoming a threat, a coalition of powers can proactively cooperate to restore equilibrium. In a system in which participants understand it is in their interests to preserve peaceful multipolarity, self-correcting dynamics can be built in. This is more than a system of &#8220;robust multipolarity&#8221;, where multiple players keep each other in check by watching each other. Systems in which participants cooperate to compensate for power grabs can create an &#8220;antifragile multipolarity&#8221; that gets stronger under stress.&nbsp;</p><p>Charter cities and seasteading ambitions seek to create meatspace alternatives to national players. Cyberspace offers another arena to compensate for state power through the creation of transnational communities that are entirely voluntary.
It has an almost perfect realization of a neutral simple rules framework to experiment with.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> Some already use this to prototype new forms of self-government in virtual communities, which increasingly become the realm in which we uphold each other&#8217;s rights.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> One day, crypto nations and network states, such as 1729, may allow their members to collectively negotiate with existing legal jurisdictions, expanding compensating powers from the digital to the physical world.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a></p><div id="youtube2-P5UAtAOV66c" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;P5UAtAOV66c&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/P5UAtAOV66c?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Balaji Srinivasan popularized the idea of actualizing digital communities into the physical world through network states. Watch his <a href="https://foresight.org/salon/balaji-s-srinivasan-the-network-state/">Intelligent Cooperation seminar</a>.</p><h2>2. Improving Cooperation</h2><h3>The Game So Far: Rights, Contracts, Prices, and Institutions</h3><p>How did we get from a civilization based on voluntary interactions to today&#8217;s sophisticated ways of cooperating, including property rights, contracts, and prices? Civilization is made up of many players pursuing many goals. 
Many of these plans make use of resources which are scarce. To pursue our goals, we had to gradually learn how to coordinate around a limited amount of resources. Property rights solve this problem. They confer the rights <em>to</em> things, which really is a proxy for the rights <em>to</em> <em>do</em> things. At first, a simple theory of accounting sufficed. Alice could tell her neighbor, Bob: &#8220;this cow is mine if it is on my property. In exchange for some fruits, I make it yours by moving it to your property&#8221;.&nbsp;</p><p>But human beings are linguistic creatures, so rights about physical goods could be repackaged to more abstract arrangements. The transferability of physical goods created a model in our minds of what it means for something to be property. Once we had this notion, we could take less literally physical items, declare them to be property, and create abstract arrangements such as contracts to manipulate them. Thousands of years of the evolution of human institutions have given us technologies of commitment, such as reputation and contracts, to bind ourselves to mutually preferred agreements.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> Alice could tell Bob: &#8220;give me your fruits now and I promise to give you my cow tomorrow&#8221;.&nbsp;</p><p>Pairwise barter deals where one good is exchanged for another for mutual benefit are hard to find and leave lots of value on the table. Because continuing to be a participant in a particular contract is itself valuable, that right can be treated as property for other contracts again. Bob could tell his neighbor Carole: &#8220;give me your grains, and you&#8217;ll get Alice&#8217;s cow when I get it in exchange for my fruits&#8221;.&nbsp;</p><p>Such large multi-way deals would yield the full benefit of trade, but are difficult to negotiate. 
Currency allows the equivalent of multi-way deals via separate pair-wise trades. Prices can represent a summary of individual valuations of a good, the demanders&#8217; use values and the suppliers&#8217; costs. Instead of running back and forth between Alice and Carole, Bob can pay for the promise of Alice&#8217;s cow and use that currency to buy Carole&#8217;s grains.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a></p><p>Via contracts, institutions for manipulating rights evolved that allow more parties to cooperate creatively. Gradually, markets emerged around the transferability of rights. By recognizing the contract participation right as property other contracts can use, we enable networks of contracts composed of contracts. This dynamic of higher order composition gave rise to today&#8217;s complex cooperation ecosystem. These arrangements were possible only by building more abstract arrangements from within a working system. Now we can reinvent them.</p><h3>The Game Ahead: Polycentrism and Out-Competing</h3><h4>Forging Polycentric Bonds: Open Source&nbsp;</h4><p>A minimal set of voluntarism can lay the groundwork for constructing and joining cooperative arrangements with very complex rule sets.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> For instance, for managing common streams in the Swiss Valais, everyone in the community has a right to take from the stream, in return for a few days of maintenance work each year. 
These Swiss communities are small, with high reciprocity and access control in the form of mountains.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> In civilization at large, we cannot casually eject people, so we find small-scale commons within larger-scale units, with arrangements cutting across both. Having multiple poles of decision-making that are independent of each other is often called &#8220;polycentrism&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a> Elinor Ostrom pioneered the concept by showing that it can help solve various collective action problems.&nbsp;</p><p>But new cooperative arrangements come with their own pathologies, incentivizing people in unforeseen ways. Take today&#8217;s corporations as an example. A corporation has few ways to create an internal project with the guarantee that it won&#8217;t be canceled by its management. There are almost no formal means within the corporate governance structure and, if the guarantee is made informally, there is no way of holding the company to it. Management can change its mind, and it can seldom deny itself the right to do so. In response, many projects form as collaborations between employees in multiple companies who use contracts to make credible commitments. Each of them thinks it is now much more credible that the project will stick around.&nbsp;</p><p>Open source is an extreme case; if the company kills the project, you can leave the company and continue working on it. Prior to open source, talented technologists had repeatedly put some of their best intellectual effort into proprietary projects that were then canceled. They realized that the projects they were willing to start under those conditions weren&#8217;t the ones worth starting.
With open source, companies can make a credible promise to employees that they can keep working on a project even if the company loses interest. In turn, talented employees pour their heart and soul into the project.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a> Making binding promises is an important means of cooperating. Rather than imposing a simple top-down model that makes this difficult, cooperative arrangements like open source cut across different circles.&nbsp;</p><p>By joining different hierarchies in a variety of cross-cutting circles, we can compensate against arbitrary power in any one of them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a> Being part of overlapping communities also makes our tendency to compare ourselves to our peers more manageable: Bob may be less affected by the grandiosity of Alice&#8217;s Dyson Sphere when his wildlife-spotter fan club praises him for preserving a pristine platypus specimen.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a> If you can seek social worth through different roles in different circles, violence may be less common than if all of your self-worth depends on one pecking order. So even if it can&#8217;t solve all collective action problems, polycentrism has other benefits: it makes it easier to engage in binding commitments and harder to engage in involuntary actions.</p><h4>Out-Competing Sub-Optimal Institutions: Education&nbsp;</h4><p>Every sub-optimality in our civilization should be an invitation to innovate&#8212;without upsetting voluntary Schelling Boundaries. Let&#8217;s take education as a collective action problem.
Scott Alexander diagnoses that &#8220;<em>students&#8217; incentive is to go to the most prestigious college they can get into so employers will hire them&#8212;whether or not they learn anything. Employers&#8217; incentive is to get students from the most prestigious college they can so that they can defend their decision to their boss if it goes wrong&#8212;whether or not the college provides value added. And colleges&#8217; incentive is to do whatever it takes to get more prestige&#8212;whether or not it helps students.&#8221;</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-27" href="#footnote-27" target="_self">27</a><em>&nbsp;</em></p><p>When noticing an apparently maladaptive feature that has long been part of a society growing in wealth and knowledge, let&#8217;s first stop and consider alternative functions. This avoids tearing down <em>Chesterton&#8217;s Fence</em>, the canonical example of a feature with an important function not understood by those seeking to remove it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-28" href="#footnote-28" target="_self">28</a></p><p>American education is undoubtedly in a sub-optimal state. Nevertheless, the U.S. university system is still where most of the world is eager to send its students. Much of the U.S.&#8217;s actual knowledge growth has made use of its system of teaching and research, even if not in the ways we commonly think. Part of good universities&#8217; attraction is prestige and trademarking.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-29" href="#footnote-29" target="_self">29</a> Even if a good school teaches a student nothing, graduating from one whose entrance criteria are predictors of intelligence is a signal, especially since students from good schools spend years socializing with other students who went through the same filter.
They received some education and a network, whether or not their classes actually increased their knowledge.</p><p>While these are not official university functions, at least trademarks are not completely random. Noticing this divergence between claimed and delivered benefits is an innovation opportunity. The Thiel Fellowship has convinced high-potential students at good universities to join a prestigious entrepreneurial program instead. Having been accepted to the university first, they can still use this as a signal. This works so well in Silicon Valley that &#8220;dropping out&#8221; is becoming a positive signal itself.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-30" href="#footnote-30" target="_self">30</a>&nbsp;</p><p>Rather than rolling back evolved institutions, newly created systems can out-compete existing ones. Until we innovate by deploying our insight in the real world, we don&#8217;t know in how many ways it is wrong. If we underestimated how good the existing institutions were for reasons we didn&#8217;t understand, then it is good that our attempts to supersede them using voluntary cooperation failed. We want those attempts to fail.&nbsp;</p><p>Even if we don&#8217;t solve a particular collective action problem via voluntary means, at least our poor choices remain confined to Pareto-preferred paths. If instead we could destroy institutions we didn&#8217;t appreciate by means more forceful than voluntary cooperation, we would risk destroying hard-gained value, perhaps irrevocably, and opening the door to Hobbesian Traps.
Given what&#8217;s at stake in civilization, our goal needs to be depessimizing, not optimizing.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-31" href="#footnote-31" target="_self">31</a>&nbsp;</p><div id="youtube2-bKjM9XDx2jU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;bKjM9XDx2jU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/bKjM9XDx2jU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Curious for more? Check out <a href="https://foresight.org/salon/dr-vernon-smith-theory-of-price-discovery-in-markets-a-mcafee-civilizational-progress/">Price Discovery in Markets &amp; Civilizational Progress</a>. </p><h1><em>3. Intelligizing </em>Voluntary Cooperation</h1><h3>The Game So Far: Civilization Serves Our Interests</h3><p>From our vantage point, it&#8217;s hard to wrap our head around the many wonderful developments that got us where we are today. Circa 1820, nine in ten people on the planet lived on less than $1.90 per day. In 2015, this was only around one in ten. As the population increased, at first, only the ratio of people in extreme poverty fell. But more recently, the actual absolute number of people in poverty is plummeting faster and faster while our overall population continues to grow.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-32" href="#footnote-32" target="_self">32</a>&nbsp;</p><p>We are making progress on so many fronts. 
Modern hygiene, better nutrition, and medical advances mean fewer children die of starvation and the elderly can rejoice in living healthier lives.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-33" href="#footnote-33" target="_self">33</a> We are rapidly becoming more educated; not only are more kids attending school for longer, but in most countries, almost the entire population is now literate.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-34" href="#footnote-34" target="_self">34</a> Countless other achievements could be added, such as greater personal autonomy and fun, or improved communication and travel.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-35" href="#footnote-35" target="_self">35</a>&nbsp;</p><p>Recently, obstacles to our ascent have become more apparent. Some worry that the rate of scientific progress is slowing down and that we are entering a &#8220;great stagnation&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-36" href="#footnote-36" target="_self">36</a> Others worry that even as technology and productivity improve, prices are rising in areas such as healthcare and education.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-37" href="#footnote-37" target="_self">37</a> While key air pollutants, such as sulfur dioxide, are in decline thanks to a mix of competition and digitalization, pollution continues to harm health and livelihoods, displacing humans and other species.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-38" href="#footnote-38" target="_self">38</a> Technological progress, coupled with increasingly global reach, also increases the potential reach of technologies&#8217; violent uses.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-39"
href="#footnote-39" target="_self">39</a> Those are tricky problems. Nevertheless, in broad strokes, civilization has been rapidly getting better at serving our interests.</p><p>If you&#8217;re surprised to hear this, you&#8217;re not alone.&nbsp;A 2016 poll found only eight in a hundred U.S. residents knew that global poverty declined in the previous 20 years.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-40" href="#footnote-40" target="_self">40</a> No wonder; it&#8217;s hard to appreciate how incremental positive developments add up across history. You weren&#8217;t alive yet when, in 1900, almost half of US households had more than one occupant per room, only every fourth household had running water, and only six in one hundred kids graduated high school. Since then, U.S. living standards have increased at least five-fold.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-41" href="#footnote-41" target="_self">41</a>&nbsp;</p><p>But expectations adjust, and the remaining brutal aspects of everyday interactions stand out. Scott Alexander captures this sentiment well: <em>&#8220;Fit companies&#8212;defined as those that make the customer want to buy from them&#8212;survive, expand, and inspire future efforts, and unfit companies&#8212;defined as those no one wants to buy from&#8212;go bankrupt and die out along with their company DNA. The reasons Nature is red in tooth and claw are the same reasons the market is ruthless and exploitative.&#8221;</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-42" href="#footnote-42" target="_self">42</a></p><p>If this sounds intuitive, consider an alternative perspective. Zooming in on evolution shows the differences between today&#8217;s civilization and nature&#8217;s predatory principles: &#8220;<em>To survive, animals must eat animals or plants; this happens most often through predation. 
Producers, unlike prey, will voluntarily seek out those who want to consume what they have, using advertising, distribution networks, and so forth. Consumers, less surprisingly, will seek out producers [...] by reading advertising, traveling to stores, and so forth. The symbiotic nature of this interaction is shown by the interest each side has in facilitating it.[&#8230;]In human markets (as in idealized markets) producers within an industry compete, but chains of symbiotic trade connect industry to industry.&#8221;&nbsp;</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-43" href="#footnote-43" target="_self">43</a></p><p>In today&#8217;s civilization, competitors compete at better cooperating with the rest of the world. While competition is the norm within industries, chains of symbiotic trade are the norm across them. But we tend to find uncommon phenomena interesting and common phenomena boring, and so focus on unusual predatory behavior and crime over boring symbiosis in trade and markets.&nbsp;</p><p>This tendency, let&#8217;s call it an <em>outlier inversion,</em> is amplified by the media reporting on interesting cases of crime and embezzlement rather than on positive trends, such as the daily massive satisfactory turnover of goods. When pointing out how far we still have to go, we can start with appreciating that we&#8217;ve come a long way. Progress occurs even when unnoticed. 
But noticing it seems an important first step towards accelerating it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-44" href="#footnote-44" target="_self">44</a></p><p>What these achievements have in common is that they are attractive from many perspectives, regardless of what one values and which goals one pursues: longer lives provide more time to pursue goals, improved health provides more energy to do so, and better education and more freedom improve one&#8217;s chances of achieving them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-45" href="#footnote-45" target="_self">45</a> Gradually and imperfectly, civilization is climbing Pareto-preferred hills: &#8220;<em>If pairwise barter amounts to Pareto-hill-climbing across a rough terrain with few available moves; trade in a system with currency and prices amounts to hill-climbing across a smoother terrain with many available moves</em>&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-46" href="#footnote-46" target="_self">46</a> Tyler Cowen compares our modern civilization to a Crusonia plant, a mythical automatically growing crop that generates more output each period.
The more crops there are, the more there is to go around for different players to pursue their goals.&nbsp;</p><div id="youtube2-MYJfntH0OEg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;MYJfntH0OEg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/MYJfntH0OEg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Listen to Tyler Cowen&#8217;s seminar on <a href="https://foresight.org/salon/tyler-cowen-george-mason-university-stubborn-attachments/">civilization progress and stumbling blocks</a>. </p><h2>The Game Ahead: Boundaries and Composition </h2><h3>Set Voluntary Boundaries: Institutions &amp; Computers</h3><p>The evolution of complex institutions starts with our need to create plans in ignorance of the plans other players are creating. A main bottleneck for civilization is not incentivization alone but coordination around the limits of knowledge. Even if we all wanted to maximally cooperate with each other, we would still mostly be ignorant about how best to do this. Cooperation brings local knowledge to bear for intelligent problem-solving:</p><p>If you want to have a package delivered, you walk into a post office, hand the clerk on the other side of the counter a package, tell her where it should go, give her the money, and the package is delivered to its specified destination. The clerk does not need to know that you are sending your father a birthday present. Likewise, the package delivery service knows about trucks, airplanes, schedules, and barcodes that you don&#8217;t need to know about.
A lot of specialized knowledge on each side is abstracted through the common interface of a package delivery service.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-47" href="#footnote-47" target="_self">47</a>&nbsp;</p><p>The package delivery service functions as an abstraction boundary over two variable factors: why someone wants to deliver a package, and the multiple means of delivering it. It can convert specific relationships into abstract relationships, and each side&#8217;s local knowledge can be composed into more complex problem-solving abilities. In addition to traditional institutions, we rely heavily on these less formal day-to-day institutions to cooperate.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-48" href="#footnote-48" target="_self">48</a> When such successful templates for cooperation emerge, they reduce the costs of reusing working solutions. We can draw on an ever wider range of knowledge without having to acquire it ourselves.&nbsp;</p><p>The benefit of composing local knowledge into more intelligent systems is only too familiar to software engineers: in the early days of software engineering, they realized that big problems need to be broken down into smaller sub-problems, each with its own plan to solve it. But if each plan can draw on any program resource, any memory address, any bit of data, and so on, the result is a massive interference problem that creates bugs everywhere.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-49" href="#footnote-49" target="_self">49</a> The locality of knowledge in the face of plan complexity led to a reliance on specialized objects.
By dividing resources into portions over which rights can be separately held, we can formulate parts of the plan confident that they can unfold without arbitrary interference from other plans using those same resources.&nbsp;</p><p>Complicated programs are broken down into simpler objects that engineers know how to build. Networks of objects communicate to exchange information. The information carried by each object is encapsulated so that one object cannot tamper with another&#8217;s contents. These boundaries create independence across objects but also allow them to be composed into architectures of cooperation. When the request-maker sends its request, the recipient responds only according to its internal logic. Via such voluntary request-making, separate objects can each solve a small component of a problem and combine their knowledge into much greater problem-solving ability.&nbsp;</p><p>Composability is key in this process. In foundational mathematics, a predicate operating on numbers is a first-order predicate, while a predicate that operates on first-order predicates is a second-order predicate. Unlike in foundational mathematics, in programming, higher-order predicates don&#8217;t have to occupy a particular place in the hierarchy of predicates. They can operate on other higher-order predicates without restriction or stratification. This lets us build abstraction layers on top of abstraction layers, which can both manipulate objects and be manipulated by objects. It&#8217;s thanks to this generic parameterizability that we can build increasingly complex ecosystems of objects manipulating other objects.</p><h3>Compose Local Knowledge: Markets &amp; APIs</h3><p>Object-oriented programming and human cooperative systems have many parallels. Just as rights can be composed via contracts into ever richer cooperative agreements, networks of computational entities can be composed into richer computation.
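</p><p>The object discipline just described can be made concrete in a short sketch. This is an illustrative toy, not drawn from the book: the class and function names are invented, and Python merely stands in for any object-oriented language. Two objects each encapsulate local knowledge and respond to requests only according to their own internal logic, while a higher-order <code>compose</code> builds new request-handlers out of existing ones without inspecting their internals:</p>

```python
class Translator:
    """Holds local knowledge (a small dictionary) that no other object can see."""

    def __init__(self):
        self._dictionary = {"hello": "bonjour"}  # encapsulated state

    def handle(self, request):
        # Responds only according to its own internal logic.
        return self._dictionary.get(request, request)


class Shouter:
    """Knows only how to emphasize text."""

    def handle(self, request):
        return request.upper() + "!"


def compose(*handlers):
    """Higher-order composition: builds a new request-handler out of existing
    ones, treating each as a black box reachable only via requests."""

    class Composite:
        def handle(self, request):
            for h in handlers:
                request = h.handle(request)  # voluntary request-making
            return request

    return Composite()


greeter = compose(Translator(), Shouter())
print(greeter.handle("hello"))  # prints: BONJOUR!
```

<p>Because <code>compose</code> returns something with the same <code>handle</code> interface as the objects it wraps, composites can themselves be composed, mirroring the abstraction layers on top of abstraction layers described above.&nbsp;</p><p>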
The package delivery service, with its structure of ritualized interaction, is similar to an API in object-oriented programming. Its primary function is to enable us to coordinate despite our ignorance of all the specialties taken into account by the other side of that counter. This is similar to how an API coordinates sub-programs with specialized knowledge and composes them into an overall system with better problem-solving abilities.</p><p>In both cases, the bottleneck is not conflict of interest but the locality of knowledge. Just as the package clerk and you want to cooperate but need the postal service abstraction boundary to find out how, in object-oriented programming, abstract interfaces (APIs) explain how concrete objects can make requests of other concrete objects across abstraction boundaries. Well-designed systems compose local knowledge. By creating boundaries between request-making and request-receiving sides, selection pressures can operate on both sides. As systems adapt and grow, local knowledge is composed in more intelligent ways.&nbsp;</p><p>This applies from primitive computer systems that utilize more knowledge than any one of their sub-components could, to systems that we can start to call intelligent, to market processes, institutions, and large-scale human organizations, up to our entire civilization.&nbsp;</p><h3>Increase Civilization&#8217;s Superintelligence: Economic Computation</h3><p>Civilization is a network of entities with specialized knowledge, making requests of entities with different specializations. Just as object-oriented programming creates an intelligent system by coordinating its member specialists, our institutions evolved to increase civilization&#8217;s adaptive intelligence by coordinating its member intelligences.
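As a rough sketch of such an abstraction boundary (the interface and class names below are hypothetical, chosen to echo the package-counter analogy), a sender can make requests of any carrier without either side knowing the other's specialties:

```python
# A sketch of an API as abstraction boundary: only the Deliverer interface
# crosses the counter; everything else stays local to one side.

from abc import ABC, abstractmethod


class Deliverer(ABC):
    """All a sender needs to know about any carrier."""

    @abstractmethod
    def deliver(self, package: str, address: str) -> str:
        ...


class TruckService(Deliverer):
    def deliver(self, package, address):
        # Carrier-side local knowledge (routes, fleets) stays hidden here.
        return f"{package} trucked to {address}"


class DroneService(Deliverer):
    def deliver(self, package, address):
        return f"{package} flown to {address}"


def send_gift(service: Deliverer) -> str:
    # Sender-side local knowledge (what to send, to whom) composes with the
    # carrier's knowledge across the interface.
    return service.deliver("birthday gift", "12 Main St")


print(send_gift(TruckService()))  # -> birthday gift trucked to 12 Main St
print(send_gift(DroneService()))  # -> birthday gift flown to 12 Main St
```

Because `send_gift` depends only on the abstract interface, selection pressure can operate on each side independently: new carriers can appear, and new senders can appear, without renegotiating anything but the boundary itself.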
We are essentially the objects within human society&#8217;s problem-solving machine, which takes into account vastly more knowledge than any one of us could possibly possess.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-50" href="#footnote-50" target="_self">50</a>&nbsp;</p><p>To illustrate this point, Leonard Read traces a pencil&#8217;s production process, from tree through production line to coloring to final product. No one person knows how to make a pencil, and most of the thousands of people involved in producing it don&#8217;t know of each other, live in different countries, and speak different languages; they nevertheless cooperate to produce a pencil, one of the thousands of items we take for granted in our lives. Friedrich Hayek summed up this dynamic as &#8220;<em>civilization begins when the individual in the pursuit of his ends can make use of more knowledge than he has himself acquired.</em>&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-51" href="#footnote-51" target="_self">51</a>&nbsp;</p><p>The parallels between human and computing systems were appreciated by influential programmers in the field: Bertrand Meyer relied on the concept of interfaces as contracts,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-52" href="#footnote-52" target="_self">52</a> Alan Kay explained his approach to computing in terms of machines talking to each other,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-53" href="#footnote-53" target="_self">53</a> and Carl Hewitt was very much inspired by Karl Popper&#8217;s idea of how knowledge composition in the scientific community works.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-54" href="#footnote-54" target="_self">54</a> The Agoric papers sum up that &#8220;<em>like all systems involving
goals, resources, and actions, computation can be viewed in economic terms</em>&#8221;.&nbsp;</p><p>If computing approaches inspired by human cooperation helped unlock many of today&#8217;s software engineering successes, they are a good start for building future human and computer arrangements. Once we improve cooperation across individual humans, we can scale it further down to individual computational processes and objects. We should expect market responsiveness to move further down into the automated objects interacting with both each other and ourselves. It is hard to imagine what is possible when we redesign system rules from the perspective of being objects inside the system. We can already guess that the next levels of the game will reproduce the composability of human commerce.&nbsp;</p><h2>Chapter Summary</h2><p>When we play the game of civilization within a framework of simple rules, interesting patterns emerge. Today&#8217;s market economy is such a pattern, emergent from rights and contracts constructing the rules of the game. The historical decline of violence shifts interactions to voluntary ones that tend to serve the players&#8217; goals. Over time, civilization is becoming more intelligent by developing abstraction boundaries&#8212;from APIs to institutions&#8212;that coordinate local knowledge into an astounding problem-solving ability. Civilization becomes increasingly superintelligent and aligned with our interests.
By exploring and reinventing the underlying rules from within the game, we can unlock new levels of cooperation.&nbsp;</p><h3><br>Next chapter: <a href="https://foresightinstitute.substack.com/p/improve-cooperation">IMPROVE COOPERATION | Info, Money, New Rights to Do Things</a><br></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Some analyses, such as Max Roser&#8217;s <a href="https://slides.ourworldindata.org/war-and-violence/#/2">War and Violence</a>, show that the percentages of individuals killed by violence range from about 60% to less than 5% in both prehistoric and nonstate societies. While that is a large range, in 2007 just 0.04% of deaths in the world were from international violence. This suggests our 2007 world was at least an order of magnitude safer than most prehistoric societies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See Max Roser&#8217;s <a href="https://ourworldindata.org/grapher/battle-related-deaths-in-state-based-conflicts-since-1946">Battle-related Deaths in State-based Conflicts</a> and Steven Pinker&#8217;s <a href="https://stevenpinker.com/publications/better-angels-our-nature">Better Angels of Our Nature</a>, in which he defines the 70+ years of peace since WWII as the &#8220;Long Peace&#8221;.&nbsp; Some scholars, such as Cirillo and Taleb in <a href="https://www.fooledbyrandomness.com/longpeace.pdf">What Are the Chances of a Third World War</a>, worry that the &#8220;Long Peace&#8221; may merely be a gap between major wars, since they seem to occur once a century.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number"
contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://stevenpinker.com/publications/better-angels-our-nature">Better Angels of Our Nature</a> by Steven Pinker.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The human felt sense of empathy is useful but not necessary for cooperation. We just need a framework of interaction that leads future entities to cooperate because it makes them better off. Perhaps future entities will have the cognitive capacity to pursue the instrumental goal of cooperation without constructing an extended sense of empathy.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>See Thomas Schelling&#8217;s <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674840317">The Strategy of Conflict.</a> To see how, imagine the Split Money Game in which you and another player have to share $100. You are not allowed to communicate but have to write down how much you claim. If your claims add up to $100 or less, each of you gets the amount you claimed. If the sum is higher than $100, both of you get nothing. What would you choose? Chances are you choose $50. Why? Because you can reasonably expect it to be a unique focal point for the other player.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>For instance, the 1976 exception allowing the US government to override citizens&#8217; rights in a state of emergency resulted in 35 active emergencies in 2020, each renewed annually by the president.
There are exemptions that warrant a breach of voluntarism. Nevertheless, for any state in which an involuntary action could lead to a Pareto-preferred world, it is often possible that an alternative action can bring it about voluntarily.&nbsp; <a href="https://www.fitz-claridge.com/taking-children-seriously/">Taking Children Seriously</a>, a child-education philosophy, holds that it is possible to raise even children, often treated as agents incapable of consent, without doing things against their will. Instead, &#8220;<em>parents and children work to find a solution all parties genuinely prefer to all other candidate solutions they can think of</em>&#8221;.&nbsp; Finding common preferences can be hard, but where they are found, there is no coercion that can spiral into Hobbesian Traps.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><a href="https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704">Doomsday Machine</a> by Daniel Ellsberg.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><a href="https://www.taylorfrancis.com/chapters/heat-temperature-one-marianne-wiser-susan-carey/e/10.4324/9781315802725-16">When Heat and Temperature Were One</a> by Marianne Wiser and Susan Carey.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Mark S.
Miller&#8217;s work on access control, discussed in Chapter 7, is based on a similar intention to disambiguate &#8220;permission&#8221; versus &#8220;authority&#8221;.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>In <a href="https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274">The Meaning of Decentralization</a>, Vitalik Buterin points out that, in the binding of words to meanings, no decision-making organization controls how people actually choose to speak, even though some try.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p><a href="https://stevenpinker.com/publications/language-cognition-and-human-nature-selected-articles">Language, Cognition, and Human Nature</a> by Steven Pinker.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p><a href="https://www.amazon.com/Behave-Biology-Humans-Best-Worst/dp/1594205078">Behave</a> by Robert Sapolsky.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Sam Butler proposes a more granular spectrum: natural language, followed by markets, followed by the Swiss federation, followed by US federalism, followed by Oligarchy/Plutocracy, followed by Monarchy/Dictatorship.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number"
contenteditable="false" target="_self">14</a><div class="footnote-content"><p>See Henry Kissinger&#8217;s <a href="https://www.amazon.com/World-Order-Henry-Kissinger/dp/0143127713">World Order.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>See Wikipedia&#8217;s <a href="https://en.wikipedia.org/wiki/Collective_security">Collective Security</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>See Mark Miller&#8217;s <a href="https://www.youtube.com/watch?v=kOFzisF7aNw">Computer Security as the Future of Law</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>See Chip Morningstar&#8217;s <a href="http://www.fudco.com/chip/lessons.html">The Lessons of LucasFilm&#8217;s Habitat</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>See Balaji Srinivasan&#8217;s <a href="https://1729.com/">1729</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>See Robert Axelrod&#8217;s <a href="https://www.amazon.com/Evolution-Cooperation-Revised-Robert-Axelrod/dp/0465005640">The Evolution of Cooperation</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" 
href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p><a href="https://digitalcommons.chapman.edu/cgi/viewcontent.cgi?article=1304&amp;context=esi_working_papers">Adam Smith&#8217;s Theory of Value</a> by Vernon Smith.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>See Robert Nozick&#8217;s <a href="https://archive.org/details/0001AnarchyStateAndUtopia">Anarchy, State, Utopia</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>See Elinor Ostrom&#8217;s <a href="https://www.cambridge.org/core/books/governing-the-commons/analyzing-longenduring-selforganized-and-selfgoverned-cprs/DABCDC0E47B234610BE5EFE959583404">Governing the Commons</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>See Elinor Ostrom&#8217;s <a href="https://web.pdx.edu/~nwallace/EHP/OstromPolyGov.pdf">Beyond Markets and States</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p>With the transition to remote virtual work, the nature of work is becoming more polycentric. With physical travel to an office, you can only be in one place at a time. Whereas if you're working from home over videoconference, there is no such exclusivity constraint. 
The normal employment relationship may move to multiple jobs for multiple organizations, blurring the circles of employment and independent consulting, and with that their hierarchies with various kinds of rewards.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p>See Gwern&#8217;s <a href="https://www.gwern.net/The-Melancholy-of-Subculture-Society">The Melancholy of Subculture Society</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p>Virtual communities may also help curb polarization because people are less likely to encounter random extreme positions of the out-group in the wild but may gradually get to know pieces of it through overlapping communities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-27" href="#footnote-anchor-27" class="footnote-number" contenteditable="false" target="_self">27</a><div class="footnote-content"><p>See Scott Alexander&#8217;s <a href="https://slatestarcodex.com/2014/07/30/meditations-on-moloch/">Meditations on Moloch</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-28" href="#footnote-anchor-28" class="footnote-number" contenteditable="false" target="_self">28</a><div class="footnote-content"><p>In <a href="https://www.amazon.com/Thing-G-K-Chesterton-ebook/dp/B0046LV37S/ref=sr_1_1?dchild=1&amp;keywords=the+thing+chesterton&amp;qid=1612720241&amp;s=digital-text&amp;sr=1-1">The Thing</a>, Gilbert K. 
Chesterton introduces this principle by analogizing institutions with a fence erected across land: &#8220;<em>The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.</em>&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-29" href="#footnote-anchor-29" class="footnote-number" contenteditable="false" target="_self">29</a><div class="footnote-content"><p>See Robin Hanson and Kevin Simler&#8217;s <a href="https://www.elephantinthebrain.com/">Elephant in the Brain</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-30" href="#footnote-anchor-30" class="footnote-number" contenteditable="false" target="_self">30</a><div class="footnote-content"><p>Pr&#243;spera in Honduras is a nascent charter city experimenting with 3D property rights, modular construction of homes, common law regulatory options, and education and healthcare options patchworked from successful countries. The Nevada state government is considering legislation that would let companies form &#8220;Innovation Zones&#8221; with county-level governmental powers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-31" href="#footnote-anchor-31" class="footnote-number" contenteditable="false" target="_self">31</a><div class="footnote-content"><p>Nassim Taleb&#8217;s term &#8216;antifragile&#8217;,&nbsp;introduced in <a href="https://www.amazon.com/dp/B009K6DKTS/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1">Antifragile</a>, describes systems that grow stronger under pressure rather than just being resilient to pressure. 
The question we will address in future chapters is about our system&#8217;s proclivity for disaster. We expect antifragile systems also to be less prone to disaster. But there may be other systems that don&#8217;t grow as strong under pressure but are also less prone to complete disaster. In that case, we may have to be content with such a robust system, even if it is not antifragile.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-32" href="#footnote-anchor-32" class="footnote-number" contenteditable="false" target="_self">32</a><div class="footnote-content"><p>For instance, in <a href="https://ourworldindata.org/extreme-poverty">Global Extreme Poverty</a>, Max Roser compares absolute numbers of poverty between the early 1980s and 2015: From 2 billion people, they dropped to 735 million people living in extreme poverty.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-33" href="#footnote-anchor-33" class="footnote-number" contenteditable="false" target="_self">33</a><div class="footnote-content"><p><a href="https://ourworldindata.org/life-expectancy#how-did-life-expectancy-change-over-time">How Did Life Expectancy Change over Time?</a> by Max Roser. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-34" href="#footnote-anchor-34" class="footnote-number" contenteditable="false" target="_self">34</a><div class="footnote-content"><p>See <a href="https://wonderingmaps.com/global-literacy-by-age/">Global Literacy by Age</a> and Steven Pinker&#8217;s <a href="https://stevenpinker.com/publications/enlightenment-now-case-reason-science-humanism-and-progress">Enlightenment Now</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-35" href="#footnote-anchor-35" class="footnote-number" contenteditable="false" target="_self">35</a><div class="footnote-content"><p>See Tyler Cowen&#8217;s <a href="https://tylercowen.com/">Stubborn Attachments</a> and Matt Ridley&#8217;s <a href="http://www.rationaloptimist.com/">Rational Optimist</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-36" href="#footnote-anchor-36" class="footnote-number" contenteditable="false" target="_self">36</a><div class="footnote-content"><p>In <a href="https://tylercowen.com/the-great-stagnation-how-america-ate-all-the-low-hanging-fruit-of-modern-history-got-sick-and-will-eventually-feel-better/">The Great Stagnation</a>,<em> </em>Tyler Cowen suggests progress is unevenly distributed over time. While it has been in a slow period of late, we should do our best to help new technological developments to speed it back up. 
See Patrick Collison and Michael Nielsen&#8217;s <a href="https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/">Science Is Getting Less Bang for Its Buck</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-37" href="#footnote-anchor-37" class="footnote-number" contenteditable="false" target="_self">37</a><div class="footnote-content"><p>In <a href="https://www.mercatus.org/system/files/helland-tabarrok_why-are-the-prices-so-damn-high_v1.pdf">Why Are the Prices so Damn High?</a>, Helland and Tabarrok show that rising service prices, in contrast with falling goods prices, can be explained by the difficulty of further increasing productivity in services, at least until AI and robotics revolutionize services the way the Industrial Revolution revolutionized goods.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-38" href="#footnote-anchor-38" class="footnote-number" contenteditable="false" target="_self">38</a><div class="footnote-content"><p>Max Roser&#8217;s <a href="https://ourworldindata.org/air-pollution-does-it-get-worse-before-it-gets-better">Air Pollution: Does it Get Worse Before It Gets Better?</a> and Andrew McAfee&#8217;s <a href="https://andrewmcafee.org/more-from-less/overivew">More from Less</a> plot the progress we&#8217;re making on pollution.
<a href="https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions#co2-emissions-and-prosperity">CO2 and other Greenhouse Gas Emissions</a> hints at the remaining problems.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-39" href="#footnote-anchor-39" class="footnote-number" contenteditable="false" target="_self">39</a><div class="footnote-content"><p>In <a href="https://resources.giantoak.com/the-economics-of-violence">Economics of Violence</a>, Gary Shiffman suggests that &#8220;<em>access to increasingly larger markets, facilitated through information technology, the World Wide Web, and social media, creates more transnational opportunities for deception, coercion, and violence</em>.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-40" href="#footnote-anchor-40" class="footnote-number" contenteditable="false" target="_self">40</a><div class="footnote-content"><p><a href="https://glocalities.com/latest/reports/towards-2030-without-poverty?chronoform=Whitepaper-Poverty&amp;event=submi">Towards 2030 Without Poverty</a> by Martijn Lampert.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-41" href="#footnote-anchor-41" class="footnote-number" contenteditable="false" target="_self">41</a><div class="footnote-content"><p><a href="https://tylercowen.com/">Stubborn Attachments</a> by Tyler Cowen.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-42" href="#footnote-anchor-42" class="footnote-number" contenteditable="false" target="_self">42</a><div class="footnote-content"><p><a href="https://slatestarcodex.com/2014/07/30/meditations-on-moloch/">Meditations on Moloch</a> by Scott Alexander.
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-43" href="#footnote-anchor-43" class="footnote-number" contenteditable="false" target="_self">43</a><div class="footnote-content"><p><a href="https://agoric.com/assets/pdf/papers/comparative-ecology-a-computational-perspective.pdf">Comparative Ecology</a> by Eric Drexler and Mark S. Miller.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-44" href="#footnote-anchor-44" class="footnote-number" contenteditable="false" target="_self">44</a><div class="footnote-content"><p><a href="https://ourworldindata.org/history-of-our-world-in-data">Our World in Data</a> by Max Roser. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-45" href="#footnote-anchor-45" class="footnote-number" contenteditable="false" target="_self">45</a><div class="footnote-content"><p>This positive development is larger than economic growth alone and should be attractive to other popular approaches for evaluating progress. In <a href="https://punarjitroyc.weebly.com/uploads/4/6/3/3/46337267/sen_1990.pdf">Development as Capability Expansion</a>, Nobel Laureate Amartya Sen introduces capability methods for evaluating human development. These are not tied to GDP alone but measure how many valuable opportunities a person can take advantage of in their environment. His priority capabilities include literacy, health, and political freedom, all of which are on the rise.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-46" href="#footnote-anchor-46" class="footnote-number" contenteditable="false" target="_self">46</a><div class="footnote-content"><p><a href="https://agoric.com/papers/markets-and-computation-agoric-open-systems/abstract/">Markets and Computation: Agoric Open Systems</a> by Mark S.
Miller and Eric Drexler.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-47" href="#footnote-anchor-47" class="footnote-number" contenteditable="false" target="_self">47</a><div class="footnote-content"><p>It is widely understood that in the absence of money, trade would be limited by the double coincidence:&nbsp; For a barter to happen, I must have something that you want and you must have something I want. But money by itself doesn&#8217;t solve the problem. Perhaps the package delivery clerk happens to have a car he wants to sell and I happen to need a car just like it and am willing to pay just the right amount. We will never discover this because I am there to ship a package and we never get into that conversation. The collection of things we have and want is so complex that we can&#8217;t talk to every agent that we encounter about what we can do for each other. Institutions help me, with all my specialized knowledge about what I want, to find others wanting complementary things.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-48" href="#footnote-anchor-48" class="footnote-number" contenteditable="false" target="_self">48</a><div class="footnote-content"><p><a href="http://www.erights.org/talks/categories/index.html">Institutions as Abstraction Boundaries</a> by Bill Tulloh and Mark S. Miller.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-49" href="#footnote-anchor-49" class="footnote-number" contenteditable="false" target="_self">49</a><div class="footnote-content"><p><a href="https://agoric.com/papers/markets-and-computation-agoric-open-systems/abstract/">Markets and Computation: Agoric Open Systems</a> by Mark S. 
Miller and Eric Drexler.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-50" href="#footnote-anchor-50" class="footnote-number" contenteditable="false" target="_self">50</a><div class="footnote-content"><p><a href="https://www.econlib.org/library/Essays/rdPncl.html?chapter_num=2#book-reader">I, Pencil</a> by Leonard E. Read.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-51" href="#footnote-anchor-51" class="footnote-number" contenteditable="false" target="_self">51</a><div class="footnote-content"><p><a href="https://www.iea.org.uk/sites/default/files/publications/files/Hayek's%20Constitution%20of%20Liberty.pdf">The Constitution of Liberty</a> by Friedrich Hayek.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-52" href="#footnote-anchor-52" class="footnote-number" contenteditable="false" target="_self">52</a><div class="footnote-content"><p><a href="https://archive.eiffel.com/doc/manuals/technology/bmarticles/sd/contracts.html">Contracts for Components</a> by Bertrand Meyer.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-53" href="#footnote-anchor-53" class="footnote-number" contenteditable="false" target="_self">53</a><div class="footnote-content"><p><a href="http://www.vpri.org/pdf/hc_user_interface.pdf">User Interface: A Personal View</a> by Alan Kay.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-54" href="#footnote-anchor-54" class="footnote-number" contenteditable="false" target="_self">54</a><div class="footnote-content"><p><a href="https://www.aaai.org/Papers/Workshops/2008/WS-08-14/WS08-14-003.pdf">Development of Logic Programming</a> by Carl Hewitt.</p></div></div>]]></content:encoded></item><item><title><![CDATA[3.
MEET THE PLAYERS | Value Diversity ]]></title><description><![CDATA[Last Chapter: OVERVIEW | What to Expect From This Game]]></description><link>https://foresightinstitute.substack.com/p/meet-the-players</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/meet-the-players</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:23:10 GMT</pubDate><enclosure url="https://cdn.substack.com/image/youtube/w_728,c_limit/2HN_xz_-pfg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4>Last Chapter: <a href="https://foresightinstitute.substack.com/p/overview">OVERVIEW | What to Expect From This Game</a></h4><p><br>So far we suggested that civilization is an inherited game shaped by those before us, and we must choose among possible games to pass on to our future selves and generations. In this chapter, we explore the question of what constitutes good play. It is a more philosophical chapter than the remainder of the book, but we hope it allows you to appreciate why we suggest voluntary cooperation as a strategy to play this game.&nbsp;</p><h1>Improve Your Game: Striving for Coherence, Built to Be Conflicted</h1><p>What does it mean to play our civilization game well? A scroll through Twitter shows answers differ widely. No wonder, according to a social intuitionist model for human values. Our actions are mostly based on intuitions, such as a sudden revulsion at an action. Only when prompted to explain them do we invent rationalizations for them. When it comes to reasoned judgment about right and wrong, we resemble<em> &#8220;a lawyer trying to build a case rather than a judge searching for the truth.</em>&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Our intuitions are shaped by factors outside of our control. Evolution built us to care about things because that caring structure had survival benefits. 
So here we are, creatures that care about things and that have ethical reactions to each other. We are also creatures that try to abstractly theorize about those reactions. Our intuitions as to what is better and worse often conflict, and so do our theories. How do we go forward from here?</p><p>Perhaps the tendency to create abstract rationalizations about our intuitions isn&#8217;t all that bad. We are endowed with a caring structure but also with the ability to reflect on it. At least from an individual problem-solving perspective, this ability can help us create an overall narrative aligning our various wants now with those of our future selves. We react to situations and others as infants, before we can reason about our reactions. But as we wonder about the reasons for these reactions, gain more experience, and live through times of conflict and growth, many of our intuitions will come up for revision.</p><p>We could think of ethics as our internal negotiation between which intuitions we want to hold onto as values and which we want to dismiss as biases. We revise what turns out to be futile or incompatible with the rest. In this process, nothing is sacred; our intuitions and even our beliefs change with multiple consistent equilibria among them. Nevertheless, with continuous adjustment of our default caring structure, the emerging set may help us move more towards who we want to be.</p><h2>The Reflective Equilibrium: Coherence and Conflict&nbsp;</h2><p>While some of our reflection occurs rather opaquely to us, explicit models can sometimes help us gain insight and make better choices. 
One such model is John Rawls&#8217; <em>Reflective Equilibrium</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>It has three steps:&nbsp;</p><ol><li><p><strong>Intuitions</strong>: Explore your intuitions for a number of situations.</p></li><li><p><strong>Abstraction</strong>: Abstract provisional rules of thumb that account for those intuitions.&nbsp;</p></li><li><p><strong>Reflection:</strong> When these abstractions don&#8217;t reliably recommend actions you find intuitive or when you encounter objections to them, revise either your intuitions or abstractions until you achieve a new equilibrium.&nbsp;</p></li></ol><p>Let&#8217;s see this process at work using the Trolley Dilemma to illustrate our striving for coherence, and superlongevity to illuminate our internal conflict.</p><h3>Striving for Coherence: The Trolley Dilemma</h3><p>Imagine you see a trolleybus heading in your direction. The driver is slumped over the controls, unconscious. In its path on the tracks in front are five people chatting, oblivious, soon to be mowed down. They are all going to die, but you can save them. You&#8217;re standing by the switch and if you simply pull a lever, you can divert the out-of-control trolleybus onto a different set of tracks where only one person is standing.</p><p>There is no time to warn anyone. Do you act and save five lives at the cost of only one? If your initial instincts are that the right answer is yes, you may want to think again.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>&nbsp; Imagine that, instead of pulling a lever, you have to push a bystander in front of the trolley to stop it from hitting the five people. Would you push? 
Using the Reflective Equilibrium, you may reason as follows:</p><ol><li><p><strong>Intuitions</strong>: I feel like redirecting the trolley with the switch, even if it kills one and saves five, is okay, but pushing the man into the threat would be wrong.</p></li><li><p><strong>Abstraction: </strong>I can explain my willingness to switch with a utilitarian value theory. According to act utilitarianism, I ought to choose the action that maximizes a utility measure, such as happiness. All else equal, more lives saved means more utility. My aversion to pushing the man is well encapsulated with a deontological rule, such as the Kantian categorical imperative, which prohibits actions that fail to respect the individual&#8217;s autonomy. The pushed person would be used as a mere means to stop the threat, which fails to respect his autonomy and is thus impermissible.</p></li><li><p><strong>Reflection:</strong> So far, so simple. But as I dive into the vast literature on the Trolley Dilemma, the following inner Socratic dialogue ensues:</p></li></ol><ul><li><p><strong>Pro Utilitarianism:</strong> Some neuroscientists object that my application of the Kantian rule in the push case is actually just a post-rationalization of a dubious intuition. The physical act of pushing the man may activate an emotive cue against up-front personal harm which evolved in past small hunter-gatherer groups and is not triggered by seeing a switch. Experiments show that an aversion to pushing was linked to brain areas associated with emotional processing while a willingness to switch was linked to brain areas associated with rational decision-making. 
Emotionally-laden cues should be morally irrelevant if I could rationally do more good otherwise.&nbsp; Ergo: I should revise my intuition against pushing the man in favor of utilitarian answers to both cases.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li><li><p><strong>Pro Deontology:</strong> Hold on! Perhaps those premises are shakier than I think. Some psychologists suggest that emotions have potential hidden value for rational decision-making.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Somatic markers&#8212;emotional reactions with a somatic component based upon previous similar experiences&#8212;can serve as action heuristics that prescreen choice options, so I might act in time to save the five people without taking the time to have this internal dialogue. My aversion against pushing may be an intuitive application of such a heuristic that encapsulates valuable information. For instance, having the reliable expectation that in my society we don&#8217;t push people into threats could be a good heuristic to move me closer to the trusting society I want to live in.&nbsp;&nbsp;</p></li><li><p><strong>Pro Deontology as Rule Utilitarianism: </strong>Not so fast! If I praise deontology for its value as a heuristic rather than for its intrinsic rightness, am I actually a rule-utilitarian who merely uses the maxim as a short-cut for maximizing overall utility? 
For instance, Scott Alexander suggests that &#8220;when Kant says not to act on maxims that would be self-defeating if universalized, what he means is &#8216;don&#8217;t do things that undermine the possibility to offer positive-sum bargains.&#8217;&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Instead of calculating the utility for every action, perhaps I follow Kantian heuristics as a rule to maximize total long-term utility?</p></li><li><p><strong>Pro Deontology as Cooperative Strategy:</strong> Not necessarily! That I can sometimes be modeled as a utility-maximizing agent in iterative game play doesn&#8217;t automatically make me an aggregative utilitarian. It is tempting to jump from the homo economicus of game theory to anthropomorphizing society as an agent whose utility can be maximized. But this would be a fallacy. Extrapolating from individuals to an overall mystical body, composed of our preferences having a utility, ignores the separateness of persons. When I object to pushing people into threats, it is not necessarily because it serves the greater good long-term but because not upsetting the expectation of reciprocal altruism gives me a society in which I can better achieve my own goals. What does this mean for my Trolley answers?</p></li><li><p>&#8230; The Socratic dialogue continues &#8230;&nbsp;</p></li></ul><p>Whether or not you find this particular dialogue convincing, it is one example of what it could look like to seek a more coherent caring structure.</p><h3>Built to be Conflicted: Superlongevity</h3><p>We may never fully &#8220;solve&#8221; our internal conflicts, but that may be just as well. Marvin Minsky suggested that, while we think of humans as entities with single agency, our minds are built to be conflicted. According to his multiple self view, our minds consist of many internal agents, each having simpler preferences. 
Our adaptive intelligence arises from these agents keeping each other in check.&nbsp;</p><p>Robin Hanson observes that &#8220;<em>if your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn&#8217;t turn to suicide.&#8221; </em>Thanks to our internal division, even if parts of our mind went suicidal at times, the others are there to keep them in check. So the next time we beat ourselves up when part of us wants this, part of us wants that, we may take solace that having some internal conflict, rather than perfect alignment, may be more a feature than a bug.&nbsp;</p><p>Having trained using the notorious Trolley Dilemma, you are ready for a more radical thought experiment: superlongevity. Those wanting a very long life with extreme personal growth may have to re-evaluate what it means to be alive. Your reflection on the desire for growth versus identity continuity might go like this:</p><ol><li><p><strong>Intuitions: </strong>If faced with the possibility of a very long life, I would be excited to grow into an incomprehensibly larger cognition than what I am now. I care about my future cognition pursuing a great variety of goals, creating greater adaptive complexity, problem-solving ability, and intelligence. But I would only want to pursue growth if it is actually me who is growing. What I care about now has changed a lot since my childhood. If I compare my far-future self to my current self, it is hard to imagine it will care about the same things. Becoming something incomprehensibly great is also incomprehensibly different. Is this still me?&nbsp;</p></li><li><p><strong>Abstraction: </strong>I need a theory of what it means to be me. 
The most obvious one is &#8220;similarity identity&#8221;: maintaining similarity of aspects at the core of my current identity.&nbsp;</p></li><li><p><strong>Reflection</strong>: The theory of similarity identity clearly conflicts with my desire to grow. Factors at the core of my current identity may change. If I want to hold onto my desire for growth, I need to revisit what it means to be myself.&nbsp;</p></li></ol><ul><li><p><strong>Against Similarity Identity:</strong> Choosing similarity as a defining factor of identity may be a trap. That choice developed unchallenged by new technological possibilities. My caring to stay alive is the result of an evolutionary process that offered far fewer choices and philosophical challenges. To my ancestors, what staying alive meant was pretty unambiguous. It was continuing my corporeal body&#8217;s future timeline. While there were many choices as to what to do to stay alive, there was an unambiguous interpretation of what it meant to be alive. I am now in a position to rethink my drive to stay alive, so it is better co-adapted to my other desires, such as personal growth.&nbsp;</p></li><li><p><strong>Toward Credit Theory of Identity:</strong> Can I come up with a new theory of identity that feels intuitive? How about a &#8220;credit theory of identity&#8221;: It is unclear to what extent Ancient Greek civilization died and I am part of a new civilization or to what extent I am still part of the Ancient Greek civilization. Civilizations are mushy and without clear boundaries. Similarly, once the intimacy of knowledge between brains can be close to the intimacy of knowledge within a brain, personal identities may be more fluid. I can say the Greek civilization is still alive in me, in that I think back on it fondly and credit it for having been a crucial aspect of what I grew up to become. 
Projecting forward, I can similarly imagine an incomprehensibly greater future version of myself who looks back on me fondly as the beginning that grew into it. In line with my new theory of identity, I seek to transform my desire for survival into a desire to grow into something so great that I will value its fond memory of having been me. Evolution created me with a drive for survival, for the obvious selective reasons. &#8220;Similarity&#8221; and &#8220;credit&#8221; are two alternative interpretive choices that I can then make. Though one is more obvious, neither is a truer interpretation of survival/longevity than the other. It is up to me to shape my interpretation. I choose the credit theory, to shape my overall caring structure to be more internally aligned.</p></li><li><p><strong>Pro Conflict: </strong>Wait, am I sure that, if technologically possible, I really want to intervene in my mind and &#8220;repair&#8221; parts that conflict with others? I should probably remember Minsky&#8217;s warning: when some parts of my mind remove conflicting parts, I risk destroying the richness from my mind&#8217;s natural conflict.&nbsp;</p></li><li><p><strong>Conflict as Coherence:</strong> Wait, have I just eliminated such potentially valuable internal conflict myself by reaching coherence on the fact that I want to remain conflicted?&nbsp;</p></li><li><p>&#8230; The Socratic dialogue continues &#8230;&nbsp;</p></li></ul><p>Again, whether or not you agree with the details, this is one way of updating a caring structure.&nbsp;</p><h1>Play With Others: Epistemic Humility, Open Minds</h1><p>We just saw that, as individuals, we often have little insight into our caring structure. Even if we manage to make progress towards a more coherent whole, we should expect residual internal conflict. Currently, 7 billion other players are already playing the game of civilization. They all start with different ethical intuitions and abstractions translating into different caring structures. 
With imperfect insight and conflict regarding our own caring structures, epistemic humility is advised when addressing theirs.&nbsp;</p><p>Currently, other humans are still similar enough that we can sometimes model and shape them. Many of our socially rich interactions rely heavily on this cognitive similarity: We react to people and can use our reactions to them to model their reactions to us. By predicting how others will judge us, we learn to judge ourselves. Adam Smith calls this model the <em>impartial spectator,</em> which continually asks: &#8220;if I were in your shoes, seeing me doing what I'm doing, how would I react to me?&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>&nbsp;</p><p>According to Vernon Smith, this impartial spectator is a good shortcut to explain rich human cooperative behaviors that are anomalies with respect to the game theory of that particular interaction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> We don&#8217;t always cheat, even if we could, and we sometimes punish cheating to no personal benefit. 
For instance, experiments by Dan Ariely suggest our dishonesty budget when no one is looking is determined by how much dishonesty we can allow ourselves while still maintaining a basically honest self-image.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> That&#8217;s because rather than wanting to act so as to gain praise and avoid blame, our impartial spectator makes us want to act in a praiseworthy manner, even if no one sees it.&nbsp;</p><p>Knowing about each other&#8217;s impartial spectator, and how it is shaped by others&#8217; reactions, generates a rich social account of values: We not only react to the culture around us, but influence everyone else&#8217;s values by providing an inspirational example, and by praising and supporting projects and people we admire. All of us, by leading what we think of as good lives, help to form the overall evolution of values that will outlive us.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>This is only possible as long as we are similar enough. Just as our ancestors would be shocked by our levels of tolerance, our descendants&#8217; worlds will seem very strange indeed to us and even their fellows.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> According to Robin Hanson, rates of social change have sped up with increased growth, competition, and technological change, so we should also expect accelerating value drift over time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> With people living longer lives, our descendants may increasingly live with more fellow descendants who are very different from them. 
As the menu of environments to explore, experiences to have, and biological changes to make expands, our ethical intuitions and abstractions may increasingly diverge. The more diverse our civilization, the less we may be able to meaningfully model others and their reactions to us.</p><h3>Why Value Diversity is Here to Stay</h3><p>Is so much epistemic humility really necessary for interpersonal approaches to values? Earlier, we suggested that individuals can personally strive toward more coherence across their ethical intuitions and abstractions. Surely there must be a few core intuitions or abstractions that we can all agree on? Let&#8217;s see why we shouldn&#8217;t count on it.&nbsp;</p><h4>Abstractions: The Moral Philosophy Trenches</h4><p>The long history of rigorous ethical debate invites skepticism about soon converging on one value theory. Kant&#8217;s Categorical Imperative and act utilitarianism will serve as placeholders for a complex philosophical theory landscape.&nbsp;</p><p>Kantians&#8217; disagreement on how to interpret the Categorical Imperative is well illustrated in the case of lying. Kant recommends that, to determine whether an action is permissible, we formulate it into a maxim, then check whether the maxim treats humanity as an end in itself and could apply equally to every person without being self-defeating. Kant thinks lying is prohibited, because it robs us of the autonomy to rationally decide and because, if everyone lied, no one would believe anyone. 
In short, the maxim &#8220;telling a lie&#8221; both fails to treat humanity as an end in itself and fails to be universalizable.</p><p>The Categorical Imperative as a method to generate rules is often praised for better handling rule-ambiguity than other rule-based value theories.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> Nevertheless, while some Kantians side with Kant that the maxim to be universalized is &#8220;telling a lie&#8221;, others disagree. They suggest more fine-grained maxims such as &#8220;lying in situation x&#8221; to allow lying to save an innocent friend from murder. Different people have different intuitions as to which situational aspects are morally relevant for rule construction.</p><p>Similar disagreements hold across utilitarians. They tend to agree that actions ought to maximize aggregate utility but disagree on how, when, and for whom to calculate &#8216;utilities&#8217;. Do non-human animals count? What about future, as yet unborn, generations? How much do they count?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> Are we trying to maximize pleasure? Or just minimize suffering? 
Should we cast a wider net instead to consider happiness, preference-satisfaction, virtue, or other definitions of the good life?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a>&nbsp;</p><p>Depending on one&#8217;s answers, one may soon run into Robert Nozick&#8217;s utility monsters; people &#8220;<em>who get enormously greater sums of utility from any sacrifice of others than these others lose.</em>&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> Alternatively, Derek Parfit&#8217;s mere addition paradox awaits, in which a large population with low positive utility may be better than fewer happy people as long as the final utility comes out higher.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> Each specification of utilitarian theories comes with different costs that different people with different intuitions will trade off differently. These cherry-picked disagreements ignore disagreements <em>across</em> Kantians and Utilitarians. They don't even begin to address other value theories.&nbsp;</p><p>If we can&#8217;t agree on the same value theories, can we at least agree on a few general principles to govern our collective lives that we endorse for different reasons? This may only postpone the problem such that we end up disagreeing on theories to handle value theories disagreement. For instance, John Rawls proposes there are some principles we should all consent to from behind a Veil of Ignorance. 
Such a veil would prevent humans <em>&#8220;from knowing their own particular moral beliefs or the position they will occupy in society&#8221;.</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> But getting global consensus on which factors one can take behind the Veil of Ignorance is tricky. Do you know when you live? Which species you are? Strip too much of your caring structure away and you can&#8217;t say what you want. Strip away too little, and you get a highly individualized answer. Different people have different intuitions as to what can and cannot be ignored.</p><p>With so much room for interpretation within theories, we should rely even less on picking just one theory to guide our civilizational game. The problem is not that there is no reasonable choice, but that there is more than one. A preference for one theory, or even an interpretation, depends on the same idiosyncratic factors that led to our differences in the first place.&nbsp;</p><h4>Intuitions: Evolved to Value Differently</h4><p>One may hope that even if we cannot agree on the abstraction level, our intuitions should track the same underlying foundations. After all, humans share an evolutionary context and a similar social environment. For instance, Scott Alexander notes that in a society we all need to &#8220;<em>decide how to act, what to do, what behaviors to incentivize, what behaviors to punish, what signals to send.</em>&#8221; So even if our intuitions &#8220;crystallize&#8221; differently on the surface, can we reverse engineer them to a common human morality?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a></p><p>To see how tricky it is to distill a meaningful common core from different individual intuitions, let&#8217;s revisit the Trolley Dilemma. 
Suppose you revise your intuition against pushing the man into the threat because you now think such evolutionary-influenced intuitions should be irrelevant if you can otherwise benefit more people. Then suppose you learn that you prefer your family over creatures removed in space and time because this was evolutionarily adaptive. You decide to stop caring for your family more than strangers.</p><p>Can you convince others to follow you and flatten their caring structure? They may think caring structures evolved to guide action, and it is unhelpful to care about entities we cannot help. For those who continue with a caring structure after reflecting on it, that may be part of the valuing core. As long as you hold onto <em>some</em> evolutionary intuitions as values, you may want to think twice before judging those who make different choices.</p><p>If you learn that human love for nature evolved as a mere heuristic for finding food and water, are you prepared to cut down all forests to replace them with utility-maximizing objects?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a> Different creatures come to different decisions about what they regard as bias and what, upon reflection, they hold onto as value. Ultimately, even &#8220;utilitarian&#8221; intuitions may be debunked as shortcuts for having other people help us when needed, i.e. for reciprocal altruism.</p><p>Can we at least agree on some generic strategies such as reciprocal altruism?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> Marco Del Giudice suggests that from a comfortable environment, understanding the advantages of pro-social trusting strategies is easy because we expect to play many rounds of games with each other. 
In a much harsher environment, a more short-term survival strategy might be more adaptive because there may not be time to benefit from cooperation in future games.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> Should you convince those who would get killed by cooperating to switch strategies? How?</p><p>There may be nowhere outside of humans&#8217; individual idiosyncratic circumstances from which we could recommend one universal strategy. Even if there were, it would be very difficult to reliably communicate this to others who are the product of their own environments. Without much hope of reaching interpersonal value agreement, either on the level of abstractions or on the level of underlying intuitions, value diversity is here to stay.&nbsp;</p><h1>The Future: From Value Diversity to Value Drift</h1><p>As we get more diverse, the Silver Rule may provide a good practical heuristic for our interactions for a while. If the Golden Rule is &#8220;Do unto others as you would have them do unto you&#8221;, the Silver Rule is &#8220;Don&#8217;t do unto others as you would have them not do unto you.&#8221; Currently, our models of what we want done to us may sometimes still work to figure out what others would like to have done unto them. In an increasingly diverse world, the Silver Rule is more appropriate as an epistemically humble heuristic. Figuring out how to avoid harming others is difficult enough without also trying to actively act on their behalf.</p><p>Heuristics can be useful, but if we want a robust civilizational architecture across the next few rounds of play, we need more reliable frameworks for engagement with different cognitive architectures. 
Even if we don&#8217;t have to worry about meeting alien minds tomorrow, we are actively creating mind-architectures very different from us.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a> Robin Hanson explores a potential future economy in which humans create human-brain emulations. These have reduced inclinations for art, sex, and parenting, can change speeds by changing hardware, and create temporary copies of themselves.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a>&nbsp;</p><p>This scenario at least assumes humans as a source, but we are also making remarkable progress, in both software and hardware, toward AIs that function nothing like the human brain. While current &#8220;neural-network&#8221; architectures at best crudely mirror some of our brain&#8217;s functionality, it is naive to suppose they will mirror human caring structures for long.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a></p><p>Future minds may not exhibit much of what we call &#8220;values&#8221; at all, but they could be better characterized as &#8220;goal-seeking&#8221; entities. Nevertheless, as long as they have goals and act as though they make choices, they will have revealed preferences. With less and less comprehension of other players&#8217; values, those revealed preferences may be all we have when designing systems for different players to reach their goals. But they may also be all we need to set up a playing field that allows for good games, as judged by each player.&nbsp;How? 
Find out in the next chapter.</p><div id="youtube2-2HN_xz_-pfg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;2HN_xz_-pfg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/2HN_xz_-pfg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Watch <a href="https://foresight.org/salon/robin-hanson-george-mason-university-value-drift-mark-s-miller-toward-paretotopia/">Robin Hanson discuss Value Drift followed up by Mark Miller on Voluntary Cooperation strategies</a>.<br></p><h3>Next chapter: <a href="https://foresightinstitute.substack.com/p/skim-the-manual">SKIM THE MANUAL | Intelligent Voluntary Cooperation</a></h3><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Jonathan Haidt, &#8220;The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.,&#8221; <em>Psychological Review</em> 108, no. 4 (2001): pp. 
814-834, https://doi.org/10.1037/0033-295x.108.4.814, 814.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&nbsp;John Rawls, <em>A Theory of Justice</em> (Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 1999).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Judith Jarvis Thomson, &#8220;Killing, Letting Die, and the Trolley Problem,&#8221; <em>Monist</em> 59, no. 2 (1976): pp. 204-217, <a href="https://doi.org/10.5840/monist197659224">https://doi.org/10.5840/monist197659224</a>. Thomson popularized this ingenious philosophical conundrum in this 1976 article, elaborating on the work of English philosopher Philippa Foot in &#8220;The Problem of Abortion and the Doctrine of the Double Effect.&#8221;&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Joshua D. Greene et al., &#8220;The Neural Bases of Cognitive Conflict and Control in Moral Judgment,&#8221; <em>Neuron</em> 44, no. 2 (2004): pp. 389-400, <a href="https://doi.org/10.1016/j.neuron.2004.09.027">https://doi.org/10.1016/j.neuron.2004.09.027</a>. In this study, participants solved trolley problems while in fMRI scanners; an aversion to pushing was associated with amygdala activation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Antonio R.
Damasio, <em>Looking for Spinoza: Joy, Sorrow and the Feeling Brain</em> (London: Vintage, 2004).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Scott Alexander, &#8220;You Kant Dismiss Universalizability,&#8221; Slate Star Codex, July 22, 2020, <a href="https://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/">https://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/</a>. Scott Alexander suggests that when Kant says acting on maxims would be self-defeating if universalized, he essentially means &#8216;do not do things that undermine the possibility of offering positive-sum bargains.&#8217;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Adam Smith and Edwin George West, <em>The Theory of Moral Sentiments</em> (New Rochelle, New York: Arlington House, 1969), 19.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Vernon Smith and James Otteson, &#8220;Will the Real Adam Smith Please Stand Up?&#8221; recorded 2015, on EconTalk, accessed 2022, https://www.econtalk.org/vernon-smith-and-james-otteson-on-adam-smith/</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Dan Ariely,<em> The (Honest) Truth About Dishonesty</em> (New York: Harper Collins, 2012).&nbsp; Ariely presents behavioral probes of honesty levels in the book by tempting individuals to be dishonest for
personal gain when they are observed versus when they think they are unobserved. In both cases, subjects allow themselves a dishonesty budget, which is only slightly larger when they think they are unobserved.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Russell D. Roberts, <em>How Adam Smith Can Change Your Life: An Unexpected Guide to Human Nature and Happiness</em> (New York: Portfolio/Penguin, 2015). Russ Roberts illustrates how this translates into our everyday lives. This social account of value resembles Aristotle&#8217;s <em>Virtue Ethics</em>, which focuses on the individual&#8217;s development of a stable moral character guided by such virtues as prudence, courage, and temperance.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Ted Chiang, &#8220;Catching Crumbs from the Table,&#8221; <em>Nature</em> 405, no. 6786 (2000): p. 517, <a href="https://doi.org/10.1038/35014679">https://doi.org/10.1038/35014679</a>. For a fictional glimpse of how much drift is possible with technology, Chiang writes of humans developing an embryonic gene therapy that forks their descendants into humans and metahumans.
They have little in common apart from technologies passed down from metahumans to humans.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Robin Hanson, &#8220;On Value Drift,&#8221; Overcoming Bias, February 21, 2018, <a href="https://www.overcomingbias.com/2018/02/on-value-drift.html">https://www.overcomingbias.com/2018/02/on-value-drift.html</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Isaac Asimov, <em>I, Robot</em> (Gnome Press, 1950). For an example of conflict in rule-based theories, Asimov develops four laws of robotics as a story device: 1. Protection of humanity, 2. Non-maleficence toward humans, 3. Obedience to human command, and 4. Self-preservation. To avoid conflict when rules recommend different actions, Asimov ranked the rules, so rule 1 trumps rule 2, rule 2 trumps rule 3, and so forth. The rules&#8217; broad formulation still leads to conflicts, such as in <em>The Evitable Conflict</em>, where AIs seek to follow the First Law by taking control of humanity, an action that would be impermissible by the Categorical Imperative as it fails to respect human autonomy.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Nick Bostrom, &#8220;Astronomical Waste,&#8221; 2003, Nickbostrom.com. <a href="https://nickbostrom.com/astronomical/waste.html">https://nickbostrom.com/astronomical/waste.html</a>. Future humans will care about their utility but current humans may not care to the same extent.
Nick Bostrom suggests that given a few assumptions about technological progress, energy use, and human-brain emulations, every second we delay colonization of our local supercluster loses about 10<sup>29</sup> potential human lives. One may disagree on the accuracy of this estimate, but our potential future&#8217;s vastness makes trading off utility of current humans with that of future humans difficult.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Nick Bostrom, &#8220;Infinite Ethics,&#8221; <em>Analysis and Metaphysics</em>, Vol 10 (2011): pp. 9-59. Amanda Askell, &#8220;Pareto Principles in Infinite Ethics,&#8221; PhD dissertation, NYU (2018): <a href="https://askell.io/files/Askell-PhD-Thesis.pdf">https://askell.io/files/Askell-PhD-Thesis.pdf</a>. Bostrom and Askell eloquently show how an &#8220;infinite number of sad and happy people&#8221; might pose problems for aggregative theories (and other moral theories). Anders Sandberg, David Manheim, &#8220;What is the Upper Limit of Value?&#8221; Philpapers.org (2021). <a href="https://philpapers.org/archive/MANWIT-6.pdf">https://philpapers.org/archive/MANWIT-6.pdf</a>.
Anders Sandberg and David Manheim argue for the alternative view that &#8220;the morally relevant universe is finite&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Robert Nozick, <em>Anarchy, State, and Utopia</em> (Basic Books, 1974).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>Derek Parfit, <em>Reasons and Persons</em> (Oxford University Press, 1984).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Iason Gabriel, &#8220;Artificial Intelligence, Values, and Alignment,&#8221; <em>Minds and Machines</em> Vol 30 (2020): pp. 411-437. Gabriel explores the Veil of Ignorance, Human Rights and Democracy as possible strategies for reaching &#8220;global consensus&#8221; on handling reasonable value pluralism. The very fact that there is more than one choice for handling value pluralism, each with room for interpretation, means that preferences for when to use which interpretation can diverge. For instance, the fact that many of us participate in non-democratic systems in our everyday lives suggests that there can be reasonable disagreement as to when they are and aren&#8217;t appropriate.
We slide from reasonable pluralism of values into reasonable pluralism of theories for handling reasonable value pluralism.&nbsp;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>Scott Alexander, &#8220;Value Differences as Differently Crystallized Metaphysical Heuristics,&#8221; Slate Star Codex, July 22, 2020, https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>Scott Alexander, &#8220;Value Differences as Differently Crystallized Metaphysical Heuristics,&#8221; Slate Star Codex, July 22, 2020, https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>Greene et al., &#8220;Neural Base, Cognitive Conflict,&#8221;&nbsp; 389.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Marco Del Giudice, <em>Evolutionary Psychopathology: A Unified Approach</em> (New York, NY: Oxford University Press, 2018).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>Robin Hanson &#8220;How Far to Grabby Aliens?&#8221; OvercomingBias.com, 
December 2020: <a href="https://www.overcomingbias.com/2020/12/how-far-aggressive-aliens.html">https://www.overcomingbias.com/2020/12/how-far-aggressive-aliens.html</a>. Hanson explains why we may expect to meet them in roughly half a billion years. Eliezer Yudkowsky, &#8220;<a href="https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8">Three Worlds Collide</a>,&#8221; LessWrong, January 30, 2009: https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8. Yudkowsky explores how such an encounter may further shake our value understanding. The aliens we encounter in his story, while otherwise seemingly benign, eat their own babies because this was evolutionarily adaptive for them. Our moral outrage and high ground only last until we encounter another alien civilization to whom human-evolved customs are morally atrocious.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p>Robin Hanson, <em>Age of Em</em> (Oxford University Press, 2018).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p>Robin Hanson, &#8220;On Value Drift,&#8221; Overcoming Bias, February 21, 2018, <a href="https://www.overcomingbias.com/2018/02/on-value-drift.html">https://www.overcomingbias.com/2018/02/on-value-drift.html</a>.&nbsp;Hanson notes that we may not have to worry much about <em>additional</em> value drift induced by non-em artificial intelligence anytime soon, because such AIs will take on social roles near humans, including roles we once occupied. But then again, Hanson already assumes great value drift across humans.
OVERVIEW | What to Expect From This Game]]></title><description><![CDATA[Last chapter: FOREWORD | What&#8217;s at Stake in This Game?]]></description><link>https://foresightinstitute.substack.com/p/overview</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/overview</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:21:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/rTzIop9CstI" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4>Last chapter: <a href="https://foresightinstitute.substack.com/p/foreword">FOREWORD | What&#8217;s at Stake in This Game?</a></h4><h1>Who are you?&nbsp; </h1><p>You are the result of evolutionary games<em>:</em></p><p><em>&#8220;Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stranded, stuck fast, untimely wounded, or otherwise deflected from its life's quest of delivering a tiny charge of genetic material to the right partner at the right moment in order to perpetuate the only possible sequence of hereditary combinations that could result -- eventually, astoundingly, and all too briefly -- in you.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em></p><p>What awaits after a few more iterations of this game?&nbsp; This depends, in part, on your decisions.&nbsp;</p><p>You could make nihilistic outcomes more likely:</p><p><em>"In some remote corner of the universe, poured out and glittering in innumerable solar systems, there once was a star on which clever animals invented knowledge. That was the highest and most mendacious minute of "world history" &#8212; yet only a minute. 
After nature had drawn a few breaths the star grew cold, and the clever animals had to die.&nbsp; There have been eternities when it did not exist; and when it is done for again, nothing will have happened."&nbsp;</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Or you could make fantastic futures more likely:&nbsp;</p><p><em>&#8220;Whether anyone else is out there or not, we are on our way. [...]&nbsp; Expansion will proceed, if we survive, because we are part of a living system and life tends to spread. Pioneers will move outward into worlds without end. Others will remain behind, building settled cultures throughout the oases of space [...]&nbsp; Where goals change and complexity rules, limits need not bind us. [...] New technologies will nurture new arts, and new arts will bring new standards. The world of brute matter offers room for great but limited growth. The world of mind and pattern, though, holds room for endless evolution and change. The possible seems room enough</em>.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>This book starts with high hopes for the future and the following conundrum: You&#8217;re in this game of civilization with players who have a diversity of values. Some values are complementary, some opposing, and some include the destruction of the playing field. As the game continues, rapid increases in technology give everyone more power. If you hate civilization, your next move is easy. But if you love this game, and want to see it evolve over many iterations, you have a more complicated set of choices to make.&nbsp;</p><p>Civilization is an inherited game shaped by those before you. If you&#8217;re happy your ancestors didn&#8217;t lock you into a future, should you leave everything up to future players? If only that were an option.
Any move you make will affect the choices that future players have available to them.&nbsp;You must play your turn, and therefore, you must choose among games to pass on to future generations. By choosing strategies of <em>intelligent voluntary cooperation</em>, you can set the game of civilization upon a path of rapidly increasing intelligence serving a diversity of goals.</p><p>Let&#8217;s look at a high-level overview of what we later cover in depth.</p><h2>Where To Start</h2><p>We begin by showing that goals differ across players of civilization. Each of us has subjective best guesses about the world. We disagree about what future state to move to and about how to get there. Some want to grow a pristine garden; others want to explore new worlds; others want to drive the pursuit of knowledge. These differences will become more apparent as players evolve. Some future players may have artificial minds. What they want may be very alien to human players.</p><p>In light of this ignorance, relying on voluntary interactions across players is a good heuristic for serving different goals. A <em>voluntary</em> action only depends on a player&#8217;s internal logic, leaving them &#8220;free&#8221; to engage or not engage in interactions. We tend to consent only to moves from which we expect benefit rather than harm. Such moves, which make at least one player better off without making anyone worse off, are called <em>Pareto-preferred</em>. As a rule of thumb, voluntary interactions gradually move civilization in <em>Pareto-preferred</em> directions, i.e., directions that tend to be better for everyone by their own standards.</p><p>We show that this principle has a good historical track record. Human civilization is growing less violent over time, and many things players care about, from health to education, are improving. Voluntarism enables cooperation but it does not, by itself, bring it about.
It took thousands of years of institutional evolution to create the complex systems of prices, property, and other institutions that help players <em>cooperate</em> better. Instead of arguing about dividing the pie, they get better at growing it.&nbsp;</p><p>As the game continues, it becomes increasingly intelligent. One constraint on civilization&#8217;s intelligence is that each of its players plans mostly in ignorance of others&#8217; plans. Institutions evolve to better coordinate across them by providing signals about what would be beneficial to do. Some players are humans, some are institutions themselves, and an increasing number will be software entities. Composed together by improving networks of voluntary cooperation, they increase the adaptive <em>intelligence</em> of civilization. In the pursuit of their highest values, they unlock new levels across the board.</p><h2>What To Seek More Of</h2><p>Approaching such futures requires progress in technologies of cooperation. Players' ability to benefit from each other is still limited in many ways. Perhaps the biggest problems arise when there is a state of the world that we would all prefer to jump to but lack the coordination to reach. A look at how institutions evolved to deal with these coordination problems shows how to diminish them further. With contracts, players can make binding commitments to particular future actions and cooperate for mutual benefit.&nbsp;</p><p>Countless cooperative constellations are possible across the 7 billion players of civilization. We&#8217;ve unlocked many of them, but we can do even better. The internet secures the right to information, cryptocurrency grants monetary sovereignty, blockchain makes institutions incorruptible, and smart contracts democratize the right of contract. Cryptography-based commerce, <em>cryptocommerce</em>, can create a base for amplifying and democratizing cooperation.</p><p>Some coordination problems will remain tricky.
Drafting mechanisms for 7 billion people to find each other, speed up the bargaining process, and enforce an arrangement is extraordinarily difficult. But it could unlock previously inconceivable layers of civilization. We won&#8217;t jump to this world tomorrow, but we can gradually grow into it.&nbsp;</p><h2>What To Defend Against</h2><p>As civilization evolves, players will unlock new capabilities. Biotechnology will deliver healthier lives, nanotechnology will provide wasteless manufacturing, and AI will drive unprecedented discoveries across the board. But the same technologies could be leveraged to cause unprecedented destruction. The economics of fighting wars could lead to pervasive robotic enforcement via lethal autonomous weapons and surveillance.&nbsp;</p><p>When guarding against the downsides of powerful technologies, players must resist the temptations of solutions that create more problems than they solve. Statist solutions that centralize the capacity for violence without checks and balances are such a danger. Checks on the power of the U.S., the world&#8217;s leading military player, are decreasing, while checks on its rising rival, China, are nearly absent. Both have access to weapons that could nuke the playing field of civilization.</p><p>Decentralized defense systems that allow for multipolar monitoring and cross-checking can make their own dangers more visible. Cryptography can make them more privacy-preserving. Such systems are hard to envision but will emerge from today&#8217;s game. So-called &#8220;black boxes&#8221; already provide indelible records for internal surveillance of automated systems. Smartphone cameras already democratize surveillance, making human enforcement more accountable.</p><p>Any desirable future will rely on computer security at every level, from hardware to operating systems to software, all the way to the user interface. Computer security is essential to de-risking cooperation that increasingly takes place virtually.
It is also essential for preventing automated weaponizable technologies that make mass killing trivially easy. Fortunately, there are promising candidates to address the problem; instead of adding security to a system last, they prevent insecurities from the very start. The seL4 microkernel, with machine-checked proofs of key security properties, is an excellent example.&nbsp;</p><p>To make computer security adoptable, we need a mixture of research and entrepreneurship to test it in the real world. The cryptocommerce ecosystem already serves as a test arena, where rogue actors compete to steal cryptocurrencies. It is hostile enough that insecure software dies quickly, so the ecosystem is populated by the survivors. This gives us a better chance at building a fully secure software stack from the foundations to the user. Such play-tested systems can grow within, co-exist with, and eventually outcompete current unsecurable software infrastructure.</p><h2>What To Hope For</h2><p>As civilization expands, it could act like a seed crystal dropped into a supersaturated solution, expanding its ordering principles in all directions. There is no law saying that the result will be the continually growing spontaneous-order intelligence of civilization. It could instead be the outcome of a winner-takes-all arms race to expand first.&nbsp;</p><p>If human players upgrade their tools to cooperate with artificial players newly arriving on the scene, we have much to look forward to. But even if all goes well, the universe will eventually no longer be able to sustain computation of any sort, especially not the complex computation required for intelligence. Even if we create a game that makes everything up to now seem like an insignificant speck, it is all temporary. Nevertheless, insofar as the in-between is shaped by what current players value, it&#8217;s on us to shape what happens between now and then.</p><p>The game is on and the stakes are high.
Let&#8217;s play.</p><div id="youtube2-rTzIop9CstI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;rTzIop9CstI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/rTzIop9CstI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Curious for more? Head to the <a href="https://foresight.org/salon/intelligent-cooperation-intro-mark-s-miller-christine-peterson-allison-duettmann/">Intelligent Cooperation</a> intro seminar.</p><p></p><h4>Next chapter: <a href="https://foresightinstitute.substack.com/p/meet-the-players">MEET THE PLAYERS | Value Diversity</a></h4><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&nbsp;<a href="https://www.goodreads.com/work/quotes/2305997-a-short-history-of-nearly-everything">A Short History of Nearly Everything</a> by Bill Bryson.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&nbsp;<a href="https://www.kth.se/social/files/5804ca7ff276547f5c83a592/On%20truth%20and%20lie%20in%20an%20extra-moral%20sense.pdf">On Truth and Lies in an Extra Moral Sense</a> by Friedrich Nietzsche.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>&nbsp;<a href="https://www.amazon.com/Engines-Creation-Nanotechnology-Library-Science/dp/0385199732">Engines of Creation</a> 
by K. Eric Drexler.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[1. FOREWORD | What's at Stake in This Game?]]></title><description><![CDATA[Previous chapter: START HERE: The Book]]></description><link>https://foresightinstitute.substack.com/p/foreword</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/foreword</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:21:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ormT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fead5b525-a9ce-42f5-9be3-9dafb0f34a96_1600x1130.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4>Previous chapter: <a href="https://foresightinstitute.substack.com/p/start-here">START HERE: The Book </a></h4><h1>Civilization: A Superintelligence Aligned with Human Interests</h1><p>Consider civilization as a problem-solving superintelligence.&nbsp;</p><p>The graph below shows the global decline in extreme poverty from 1820 to 2015, prompting Steven Pinker&#8217;s remark:</p><p>&#8220;We have been doing something right, and it would be nice to know what, exactly, it is.&#8221; <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>After 1980, the rate of decline increased dramatically and has continued since.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> It would be good to know the dynamics behind this dramatic decline, which has continued for the past 40 years. What did we get right?
What did civilization learn?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ormT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fead5b525-a9ce-42f5-9be3-9dafb0f34a96_1600x1130.png" data-component-name="Image2ToDOM"><img src="https://substackcdn.com/image/fetch/$s_!ormT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fead5b525-a9ce-42f5-9be3-9dafb0f34a96_1600x1130.png" width="1456" height="1028" class="sizing-normal" alt=""></a></figure></div><p>It&#8217;s not possible to answer this directly, but we would like to open it up for discussion. We need a conceptual framework for thinking abstractly about civilization over long periods of time.</p><p>We like the game metaphor. Games are universal. Asking someone to imagine a game board gives an intuitive context for explaining the iterative behavior of multiple players and the consequent outcomes.</p><p>Chess has rules. Then play begins. The game is the properties that emerge from the interactions of separately-interested players in a framework of rules.</p><p>What changed from &#8220;nature red in tooth and claw&#8221; to our current civilization? As civilization has grown less violent, it is increasingly dominated by voluntary interactions. The nature of the rules in this less threatening world evolves in a context of willing participation. 
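</p><p>The way cooperation can emerge among separately-interested players is easiest to see in the best-studied toy case, the iterated prisoner&#8217;s dilemma. A minimal sketch, using the standard textbook payoffs (the specific strategies and numbers are illustrative, not from this book):</p>

```python
# Toy iterated prisoner's dilemma (standard textbook payoffs).
# "C" = cooperate, "D" = defect; entries are (row, column) scores.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    # Cooperate first, then mirror the opponent's previous move.
    return their_moves[-1] if their_moves else "C"

def always_defect(my_moves, their_moves):
    return "D"

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

# Over repeated rounds, sustained cooperation beats mutual defection.
print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30)
print(play(tit_for_tat, always_defect, 10))  # (9, 14)
```

<p>No rule forces cooperation here; it emerges because the game is iterated and players respond to each other&#8217;s past moves.</p><p>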
The emergent outcome of the game is more effective cooperation, creating better outcomes for everybody.&nbsp;</p><p>This is something we got right.</p><h2>Technologies for Voluntary Cooperation</h2><p>Can we game the future such that more of the world can know the power of voluntary cooperation? Can our future display new forms of cooperation that are not possible with our current technologies?</p><p>Luckily, our recent history includes an illustrative case: the timeline of the deployment of modern cryptography.</p><p>Cryptography has a long, fascinating history, but here we follow a particular thread beginning in August 1977 with the publication of Martin Gardner&#8217;s &#8220;Mathematical Games&#8221; column in <em>Scientific American</em>. Gardner wrote about a ground-breaking discovery: the RSA (Rivest&#8211;Shamir&#8211;Adleman) public-key algorithm.</p><p>Mark Miller (co-author) has a personal history in the battles over cryptography. In 1976, Mark apprenticed himself to Ted Nelson, creator of the Xanadu hypertext system. They shared a vision of a global hypertext publishing system as a censorship-free liberating force. But they knew that, without the right architecture, any such system would be corrupted into a tool of <em>1984</em>-style oppression. Before the invention of modern cryptography, they could not solve this puzzle.</p><p>Mark was an avid reader of <em>Scientific American</em> and always went straight to Gardner&#8217;s column. After reading about the RSA discovery, he called Ted at an ungodly hour and joyously proclaimed, &#8220;Ted! We can prevent the Ministry of Truth!&#8221;&nbsp;</p><p>They immediately set out to get a copy of the RSA publication. In those days, you could not simply download scientific publications. Instead, they mailed a request for a copy. And waited. Nothing arrived. 
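</p><p>The discovery at the center of this story is strikingly compact. A toy sketch of textbook RSA, using the classic tiny-prime worked example (illustration only; real deployments add padding schemes and use enormous keys):</p>

```python
# Textbook RSA with tiny primes -- for illustration only, wildly insecure.
p, q = 61, 53
n = p * q                   # public modulus: 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: 2753 (Python 3.8+)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d

print(ciphertext)            # 2790
print(recovered == message)  # True
```

<p>Anyone can encrypt with the published pair (n, e); only the holder of d can decrypt, and recovering d requires factoring n. That asymmetry is what made the discovery so consequential.</p><p>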
The United States intelligence community had suppressed publication to preserve its ability to spy on all conversations.</p><p>Mark decided to take action. His mission began with a trip to the MIT campus. As a 20-year-old computer geek, he was welcome in the tribe, and before long he had the paper in hand. He knew it was critical to distribute it far and wide to counter the attempt at suppression. Fully aware of the risk, he gave copies to his most-trusted friends, telling them, &#8220;If I disappear, make sure this gets out.&#8221;</p><p>Handling the paper only with gloves, he started making copies, going to several different copy shops. He mailed copies anonymously to technology-minded hobbyist groups and magazines.</p><p>In 1978, U.S. intelligence dropped the attempt to suppress the RSA paper. Mark will never know if his personal actions made any difference. But this experience made the stakes clear. Technologies of freedom are worth fighting for, <em>and building</em>.</p><p>Mathematics and technology do not, on their own, give individuals privacy in their digital lives.</p><p>Together with like-minded cypherpunks, we kept fighting and building. On one side: export controls, mandatory Clipper chip backdoors, and approval of only weak ciphers for general use. On the other: Phil Zimmermann&#8217;s PGP, Matt Blaze&#8217;s hacking of Clipper, John Gilmore&#8217;s breaking of government-approved ciphers, and the Electronic Frontier Foundation&#8217;s case overturning export controls as unconstitutional violations of free speech. Mostly, we won.</p><p>We can easily imagine an alternate history in which these fights had been lost. Our world would be much more totalitarian. In the analog era, all of our conversations could be spied upon. 
The digital era would have combined inescapable surveillance with modern computing, leading to an inversion of democracy: Those shielded by classified cryptography are unaccountable to us, while we, in every minute aspect of our lives, are accountable to them.</p><p>While the real world resembles this nightmare to an uncomfortable extent, the public growth of modern cryptography gives us the tools to fight back.</p><p>Encryption protocols such as HTTPS give us secure communication and transactions &#8211; end-to-end encrypted messaging, secure email, and the secure credit card payments necessary for the growth of a Web economy. Human rights activists are less vulnerable due to secure messaging.</p><p>All over the world, corrupt powers destroy lives. They interfere with individuals&#8217; plans to have a good life through voluntary trade with others. How can you plan with the uncertainty of arbitrary coercive interference? It&#8217;s hard for those in the rich world to appreciate the motivation and inventiveness of people in this situation. Modern cryptography, through applications such as Bitcoin, allows for a parallel economy that does not rely on the government for commerce and trade. Some are fighting for their lives, and these tools give them a chance.</p><p>Blockchains are engineered to resist corruption. Their decentralized, cryptographically secured interactions are protected from coercive corruption by governments or criminals. Transaction history cannot be covertly rewritten. Smart contracts give us a new technological base to create <em>complex</em> voluntary arrangements, to realize new forms of cooperation.</p><p>We are an information society. Starting with these hard-earned gains, we can build a solid foundation crucial to our future, where each layer is built on previous layers in the context of trust and secure cooperation. 
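</p><p>Why transaction history resists rewriting can be seen in miniature: each block commits to the hash of its predecessor, so altering any past entry changes every hash after it. A minimal sketch of such a hash chain (no consensus, signatures, or networking &#8211; just the chaining itself; the transaction strings are made up for illustration):</p>

```python
import hashlib

def block_hash(prev_hash, data):
    # Each block commits to its predecessor's hash and its own data.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "genesis"
    for tx in transactions:
        prev = block_hash(prev, tx)
        chain.append((tx, prev))
    return chain

honest = build_chain(["alice->bob:5", "bob->carol:2"])
# Rewriting an early transaction changes every later hash, so a
# tampered copy is immediately distinguishable from the original.
tampered = build_chain(["alice->bob:500", "bob->carol:2"])
print(honest[-1][1] != tampered[-1][1])  # True
```

<p>A real blockchain adds consensus and signatures on top, but the tamper-evidence comes from this simple chaining.</p><p>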
Although the cryptocommerce space is in its infancy, we expect it to be transformative.</p><h2>Centralized Power</h2><p>Centralized power &#8212; concentrated sources of authority and control &#8212; is incompatible with a voluntary, cooperative framework of civilization.&nbsp;</p><p>We&#8217;re not advocating for revolutionary takeover or government regulations to combat centralization. First, it&#8217;s often not clear what or how to regulate. Second, such tactics often backfire, creating more centralization as a consequence.&nbsp;</p><p>Let&#8217;s take a look at a couple of cases: bank regulations and Google&#8217;s Gmail.</p><p>The banking industry is susceptible to top-down manipulation. Operation Choke Point<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> successfully shut down lawful businesses, denying them access to standard business services. This was not voluntary action by <em>some</em> banks; rather, <em>all</em> banks reacted to regulatory pressure. The targeted businesses had nowhere to turn.</p><p>Google&#8217;s Gmail is an interesting case. Before webmail, we had a world of decentralized email. There were several projects working to add crypto, which would have given us truly decentralized, secure email. Instead, centralized webmail took over.</p><p>When first released in 2004, Gmail had 2 million users. Currently, it has 1.5 billion. Google stores and has access to the plain text of users&#8217; email. Why were people willing to entrust Google with all their correspondence? One reason was Google&#8217;s slogan at the time, &#8220;<em>Don't be evil</em>&#8221;, projecting an image of trustworthiness.&nbsp;</p><p>Over time, Google accumulated a centralized trove of the contents of email communication of over a billion people. 
The US federal government, unable to resist the temptation, issued national security letters (1) demanding the handover of private email and (2) prohibiting Google from telling anyone. Whether Google wanted to resist or comply is a separate issue; the point is that a centralized vulnerability inevitably led to its corruption.&nbsp;</p><p>There is a natural dynamic between centralizing and decentralizing forces. As in the Gmail case, at first we often accept centralized vulnerability for convenience.&nbsp;</p><p>Centralized vulnerabilities create temptations to corruption that cannot be resisted. As the resulting corruption becomes apparent, it raises the competitive advantage of decentralized competitors. We seek to create incorruptible decentralized systems that can outcompete centralized systems.&nbsp;</p><p>By working within the constraints of voluntary competition, we are more protected from our own mistakes. If our dreams are misconceived, they are also less likely to outcompete the alternatives. If, on the other hand, they are the genuine improvements that we think they are, they are also more likely to win these competitions in the long run.</p><p>The battle for decentralization can never have an ultimate decisive victory, but, in the absence of astute watchfulness, it can suffer an ultimate defeat.</p><h2>Superintelligence</h2><p>There are many views on AI dangers. One prominent perspective goes as follows: Once an AI exceeds human capacity, it can improve its own design much faster than human designers can. 
As it improves itself, it also improves its ability to improve itself, leading to an explosive chain reaction which can suddenly catapult this one breakthrough AI into a superintelligent capacity exceeding all the rest of human civilization combined.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>This is the &#8220;hard takeoff&#8221; scenario, leading to a &#8220;unipolar&#8221; outcome. It is a hard takeoff because it happens so suddenly that nothing else has a chance to adapt during the process. It is unipolar in the sense that this one superintelligent entity may be more powerful than everything else, and so in a position to rule everything else.</p><p>Starting from the notion that this unipolar takeover is inevitable&#8212;that we will necessarily be ruled by a permanent dictator of our own design&#8212;some conclude we should design a benevolent dictator: one that <em>wants</em> to serve our interests. This would raise two design questions: (1) how do we construct an AI that wants to serve our interests, and (2) what are our interests anyway? The first question is hard, but the second opens up philosophical problems that have eluded any general agreement over the last few millennia.</p><p>From our perspective, any best case scenario arising from this notion is a worst case scenario, the one we must prevent at all costs. Any unipolar takeover of the world is unlikely to be benevolent. We might hope that this unprecedented power over the world would be shaped by &#8220;the right kinds of people&#8221;, but history tells us that powerful positions attract those who want power.&nbsp;</p><p>Instead of a centrally designed formula encoding the general good, we currently have a diverse, pluralistic world of many different people making their own choices about what they want and how to achieve it. 
People formulate their goals using their idiosyncratic personal knowledge within a great variety of cultural and philosophical systems. There may be no general good beyond the revealed preferences of each of us making choices to pursue our goals in our own way.</p><p>If the problem of superintelligence were a purely new problem with no historical precedent, there would be little relevant to learn from history. But human institutions are already non-human intelligences with which we cooperate and against which we defend ourselves. Depending on their nature, institutions can be well aligned or badly aligned with human interests.&nbsp;</p><p>Until the last few centuries, most of our history was the history of tyranny. By contrast, with the invention of democracy, separation of powers, rule of law, due process, individual rights, and independent judiciaries, not only have we better aligned our institutions with our interests, we have also enabled the ecosystem of these institutions &#8212; our civilization as a whole &#8212; to rapidly grow in intelligence and benefit to its constituents.&nbsp;</p><p>The superintelligence of civilization is already emergent from the interplay of human and machine intelligences. Within our multipolar civilization, as machines get more intelligent, they will contribute more to the overall intelligence of our civilization.&nbsp;</p><p>We're already facing, and have faced now for over seven decades, the existential risk of nuclear war. We are in a multipolar world of multiple nations armed with nukes and willing to use them if they absolutely have to. Now that we're in that situation, our only options going forward are multipolar options. Anything that threatens a unipolar takeover risks provoking a nuclear war. So unipolar solutions may be off the table anyway.</p><p>We cannot survey the landscape of choices from an imagined position <em>outside the game</em>. 
One can imagine Karl Marx sitting at his desk, overlooking the factory floor below, appreciating the coordination and efficiency of the workers. <em>It&#8217;s all right there in front of him.</em> He cannot fathom why the economy outside needs more than this elemental structure.</p><p>We are in a different game, and the people who can shape it are inside it. It is an iterated game. We are grateful that previous players set it up such that we can now play the next moves from inside the game. Whatever we do, the result determines what kind of game gets played in the future. We cannot <em><strong>not</strong></em> play the game.</p><p>But we <em><strong>can</strong></em> iterate and make sure the game emerges in a multipolar manner within the voluntary framework of civilization. This book explores technologies that can be useful on this path.&nbsp;</p><p>In the future, most cognition will be non-human. The game dynamics we start now must be good enough for human and non-human interests, so that these future players have an interest in upholding the voluntary nature of the game. <em>That is our ultimate protection</em>. If we can accomplish this difficult task, future players can unlock currently unimaginable levels of this beautiful game.</p><h4>Next chapter: <a href="https://foresightinstitute.substack.com/p/overview">OVERVIEW | What to Expect From This Game</a></h4><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&nbsp;<a href="https://www.edge.org/3rd_culture/pinker07/pinker07_index.html">A History of Violence</a> by Steven Pinker. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Graph taken from <a href="https://ourworldindata.org/grapher/world-population-in-extreme-poverty-absolute">World Population Living in Extreme Poverty</a> from Our World in Data. A similar graph, showing the world with China excluded, is available in <a href="https://ourworldindata.org/grapher/poverty-decline-without-china?country=World+not+including+China~OWID_WRL">Poverty Decline without China</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>&nbsp;<a href="https://en.wikipedia.org/wiki/Operation_Choke_Point">Operation Choke Point</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a href="https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1">&nbsp;Superintelligence</a> by Nick Bostrom.</p></div></div>]]></content:encoded></item><item><title><![CDATA[0. START HERE: The Book ]]></title><description><![CDATA[The Book Welcome to this living book and book club about technologies for intelligent voluntary cooperation by Allison Duettmann, Mark S. Miller, and Christine Peterson, Foresight Institute. 
Gaming the Future: Technologies for Intelligent Voluntary Cooperation]]></description><link>https://foresightinstitute.substack.com/p/start-here</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/start-here</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Mon, 14 Feb 2022 18:20:44 GMT</pubDate><enclosure url="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/8c2b1f25-0525-4699-8b72-328a5227031b_3800x1367.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Book</h2><h1><strong>Gaming the Future: Technologies for Intelligent Voluntary Cooperation</strong></h1><p>Welcome to this living book and book club about technologies for intelligent voluntary cooperation by <a href="https://foresight.org/our-team/allison-duettmann/">Allison Duettmann</a>, <a href="https://papers.agoric.com/authors/mark-s-miller/">Mark S. Miller</a>, and <a href="https://foresight.org/our-team/christine-peterson/">Christine Peterson</a>, <a href="https://foresight.org/">Foresight Institute</a>.</p><h2><strong>Intro</strong></h2><p>Opportunities for bright futures enabled by bio, nano, and AI are now within our reach. But technological proliferation also brings risks that threaten the very existence of civilization. To help civilization navigate this abyss, this book addresses three questions:</p><p>1. How can we help civilization cooperate better? <br>2. How can we help civilization defend itself better? <br>3. How can we help civilization do both - cooperation and defense - in light of AI?</p><p>Explore strategies, tools, and technologies for enabling voluntary cooperation across a diversity of intelligences. 
Let&#8217;s unlock Paretotropian futures of high technology in which valuing entities can pursue their highest function through iterative play.</p><h2>Short-cuts to chapters</h2><p>Follow the links at the bottom of each page, or jump around between chapters: </p><ol><li><p><a href="http://foresightinstitute.substack.com/p/foreword">FOREWORD | What's at Stake in This Game?</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/overview">OVERVIEW | What to Expect From This Game</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/meet-the-players">MEET THE PLAYERS | Value Diversity</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/skim-the-manual">SKIM THE MANUAL | Intelligent Voluntary Cooperation &amp; Paretotropism</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/improve-cooperation">IMPROVE COOPERATION | Information, Money, Rights, Contracts, and Privacy</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/genetic-takeover">GENETIC TAKEOVER | Cryptocommerce</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/defend-physical">DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/defend-cyber">DEFEND AGAINST CYBER THREATS | Computer Security</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/new-players">WELCOME NEW PLAYERS | Artificial Intelligences</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/iterate-game">ITERATE THE GAME | Racing Where?</a></p></li></ol><h2>Thank you</h2><p>We would like to thank members of <a href="https://foresight.org/intelligent-cooperation-videos/">Foresight&#8217;s Intelligent Cooperation Group</a> for shaping this book into what it is through our 2021 seminars. 
The seminars are now incorporated into the chapters as deep dive seminars.</p><ul><li><p>Robin Hanson, George Mason University | Value Drift</p></li><li><p>Balaji S. Srinivasan, 1729 | The Network State</p></li><li><p>Vernon Smith, Chapman University | Theory of Price Discovery in Markets&nbsp;</p></li><li><p>Andrew McAfee, MIT | Civilizational Progress&nbsp;</p></li><li><p>Tyler Cowen, George Mason University | Stubborn Attachments</p></li><li><p>Audrey Tang, Taiwan&#8217;s Digital Minister | Tools for Openness</p></li><li><p>Chris Hibbert, Anthony Aguirre, Martin Koeppelmann, Paul Gebheim, and Robin Hanson | Prediction &amp; Replication Markets</p></li><li><p>Christine Lemmer-Webber | Re-Decentralizing Social Networks</p></li><li><p>Kate Sills, Independent | NFTs and Engineering Property Rights</p></li><li><p>Arthur Breitman, Tezos | Blockchain Governance&nbsp;</p></li><li><p>Marc Stiegler, Sci-Fi author | The Digital Path</p></li><li><p>Chip Morningstar, Meng Weng, and Federico Ast | Split Contracts, Computational Law &amp; Decentralized Arbitration&nbsp;</p></li><li><p>Matan Field, Esteban Ordano, Jazear Brooks, Tyler Golato, and Patrick Joyce | DAOs</p></li><li><p>Glen Weyl, RadicalxChange | Social Technology for a Political Economy of Increasing Returns&nbsp;</p></li><li><p>Alex Tabarrok, George Mason University | Dominant Assurance Contracts&nbsp;</p></li><li><p>Zooko Wilcox and Howard Wu | Zero-knowledge-enabled Cooperation</p></li><li><p>Jim Epstein, Primavera De Filippi, and Brewster Kahle | A Peaceful Transition into Cryptocommerce?&nbsp;</p></li><li><p>Daniel Ellsberg, The Doomsday Machine | Nuclear Risks: Doomsday (Still) Hiding in Plain Sight&nbsp;</p></li><li><p>David Brin, The Transparent Society | Transparent Society &amp; Sousveillance&nbsp;</p></li><li><p>Gernot Heiser, University of New South Wales | SeL4: Formal Proofs for Real-World Cybersecurity</p></li><li><p>David Krakauer, Santa Fe Institute | Collective Computing</p></li><li><p>Gillian 
Hadfield, University of Toronto | Incomplete Contracts &amp; AI Alignment</p></li><li><p>Richard Craib, NumerAI | Techniques for Intelligence Coordination</p></li><li><p>Peter Norvig, Google | AI: A Modern Approach</p></li><li><p>Anders Sandberg, Oxford University | Game Theory of Cooperating with Alien Minds</p></li><li><p>Robin Hanson, George Mason University | A Simple Model of Grabby Aliens</p></li></ul><p>We would also like to thank our book club guests for discussing the book after its publication. Their talks are now also included at the end of each chapter. </p><ul><li><p>David Friedman, Author of Legal Systems Very Different from Ours</p></li><li><p>Robin Hanson, George Mason University</p></li><li><p>Kate Sills, Independent Software Engineer</p></li><li><p>Paul Gebheim, Forecast Foundation</p></li><li><p>Primavera De Filippi, Koala</p></li><li><p>Arthur Breitman, Tezos</p></li><li><p>David Brin, Author of Transparent Society</p></li><li><p>Gernot Heiser, SeL4</p></li><li><p>Juan Benet, Protocol Labs</p></li><li><p>Trent McConaghy, Ocean Protocol</p></li><li><p>Stuart Armstrong, Future of Humanity Institute</p></li></ul><p>Finally, we would like to thank Keith Mansfield, Tom Galloway, Terry Stanley, Chris Hibbert, Alan Karp, Jazear Brooks, David Manheim, Kate Sills, Chip Morningstar, Gillian Hadfield, Robin Hanson, David Friedman, Jim Bennett, Micah Zoltu, and Dan Finlay for extensive comments on the book draft. The key ideas of Paretotopia (as it was then called) were originally worked out by Mark S. Miller in collaboration with Eric Drexler. We learned a lot and all remaining errors are our own. </p><h1>Join</h1><p>We hope you will take an interest in critiquing and augmenting these ideas by commenting. This book, like a good game, is here to be iterated and improved for the next round. 
Follow <a href="https://foresight.org/">Foresight Institute</a> on <a href="https://twitter.com/foresightinst">Twitter</a>.</p><h2><br>Next: <a href="https://foresightinstitute.substack.com/p/foreword">FOREWORD | What&#8217;s at Stake in This Game?</a></h2>]]></content:encoded></item><item><title><![CDATA[Gaming the Future:]]></title><description><![CDATA[Technologies of Intelligent Voluntary Cooperation]]></description><link>https://foresightinstitute.substack.com/p/coming-soon</link><guid isPermaLink="false">https://foresightinstitute.substack.com/p/coming-soon</guid><dc:creator><![CDATA[Allison Duettmann]]></dc:creator><pubDate>Thu, 13 Jan 2022 18:46:18 GMT</pubDate><enclosure url="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/9d76073f-46a9-41c6-86c3-dc39d44fc4f4_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to this living book about technologies for intelligent voluntary cooperation by <a href="https://foresight.org/our-team/allison-duettmann/">Allison Duettmann</a>, <a href="https://papers.agoric.com/authors/mark-s-miller/">Mark S. Miller</a>, and <a href="https://foresight.org/our-team/christine-peterson/">Christine Peterson</a>, <a href="https://foresight.org/">Foresight Institute</a>. </p><h2>The Book </h2><h4><strong>Gaming the Future: Technologies for Intelligent Voluntary Cooperation</strong></h4><p><em>Abstract</em></p><p>Have you ever played <em>Civilization?</em> In the game, you&#8217;re discovering technologies that unlock new levels, one capability at a time. But not all innovations are equal. Better technologies of cooperation could unlock new levels of progress across the board. Opportunities for bright futures enabled by bio, nano, and computing technologies are now within our reach. Their proliferation also comes with risks and authoritarian attempts at control. This book explores how technologies of intelligent voluntary cooperation can help us navigate the traps. 
Cryptocommerce enables decentralized, secure cooperation across human and computing entities. This unlocks a Paretotropian future of high technology and high freedom.&nbsp;</p><p><em>Table of contents with hyperlinks to chapters</em></p><ol><li><p><a href="http://foresightinstitute.substack.com/p/foreword">FOREWORD | What's at Stake in This Game?</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/overview">OVERVIEW | What to Expect From This Game</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/meet-the-players">MEET THE PLAYERS | Value Diversity</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/skim-the-manual">SKIM THE MANUAL | Intelligent Voluntary Cooperation</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/improve-cooperation">IMPROVE COOPERATION | New Info, Money, Rights, Contracts, Privacy</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/genetic-takeover">GENETIC TAKEOVER | Cryptocommerce</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/defend-physical">DEFEND AGAINST PHYSICAL THREATS | Multipolar Active Shields</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/defend-cyber">DEFEND AGAINST CYBER THREATS | Computer Security</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/new-players">WELCOME NEW PLAYERS | Artificial Intelligences</a></p></li><li><p><a href="http://foresightinstitute.substack.com/p/iterate-game">ITERATE THE GAME | Racing Where?</a></p></li></ol><p>We hope you will take an interest in critiquing and augmenting these ideas by commenting. 
This book, like a good game, is here to be iterated and improved for the next round.</p><p><em>Acknowledgements</em></p><p>We would like to thank members of <a href="https://foresight.org/intelligent-cooperation-videos/">Foresight&#8217;s Intelligent Cooperation Group</a> for shaping this book into what it is through our 2021 seminars:</p><ul><li><p>Robin Hanson, George Mason University | Value Drift</p></li><li><p>Balaji S. Srinivasan | The Network State</p></li><li><p>Dr. Vernon Smith | Theory of Price Discovery in Markets </p></li><li><p>A. McAfee | Civilizational Progress </p></li><li><p>Tyler Cowen, George Mason University | Stubborn Attachments</p></li><li><p>Audrey Tang, Taiwan Digital Minister | Tools for Openness</p></li><li><p>Augur, Metaculus | Prediction &amp; Replication Markets</p></li><li><p>Christine Lemmer-Webber | Randy Farmer | Re-Decentralizing Networked Communities</p></li><li><p>Kate Sills, Agoric | NFTs and Engineering Property Rights</p></li><li><p>Arthur Breitman, Tezos | Blockchain Governance </p></li><li><p>Marc Stiegler, Agoric | The Digital Path</p></li><li><p>Chip Morningstar, Meng Weng, Federico Ast&nbsp;| Split Contracts, Comp. Law &amp; Decentralized Arbitration </p></li><li><p>DAOstack, Decentraland, SifChain, ResearchHub, VitaDAO | DAOs</p></li><li><p>Glen Weyl, RadicalxChange | Social Technology for a Political Economy of Increasing Returns </p></li><li><p>Alex Tabarrok, George Mason University | Dominant Assurance Contracts </p></li><li><p>Zooko Wilcox, ECC, Howard Wu, Aleo | Zero-knowledge-enabled Cooperation</p></li><li><p>Jim Epstein, Primavera De Filippi, Brewster Kahle | Peaceful Transition into Cryptocommerce? 
</p></li><li><p>Daniel Ellsberg, The Doomsday Machine | Nuclear Risks: Doomsday (Still) Hiding in Plain Sight </p></li><li><p>David Brin, The Transparent Society | Transparent Society &amp; Sousveillance </p></li><li><p>Gernot Heiser | SeL4: Formal Proofs for Real-World Cybersecurity</p></li><li><p>David Krakauer, Santa Fe Institute | Collective Computing</p></li><li><p>Gillian Hadfield, University of Toronto | Incomplete Contracts &amp; AI Alignment</p></li><li><p>Richard Craib, NumerAI | Techniques for Intelligence Coordination</p></li><li><p>Peter Norvig, Google | AI: A Modern Approach</p></li><li><p>Anders Sandberg, Oxford University | Game Theory of Cooperating with Alien Minds</p></li><li><p>Robin Hanson, George Mason University | A Simple Model of Grabby Aliens</p></li></ul><p>The seminars are now incorporated into the text as deep dives.</p><p>We would especially like to thank Keith Mansfield, Tom Galloway, Terry Stanley, Chris Hibbert, Alan Karp, Jazear Brooks, David Manheim, Kate Sills, Chip Morningstar, Gillian Hadfield, Robin Hanson, David Friedman, Jim Bennett, Micah Zoltu, and Dan Finlay for extensive comments on the book draft. We learned a lot and all remaining errors are ours.</p><h1>Join Us!</h1><p>Follow <a href="https://foresight.org/">Foresight Institute</a> on <a href="https://twitter.com/foresightinst">Twitter</a> and <a href="https://foresight.org/foresight-discord/">apply to join our Discord</a>.</p><p>Next up: <a href="https://foresightinstitute.substack.com/p/foreword">FOREWORD | What&#8217;s at Stake in This Game?</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://foresightinstitute.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://foresightinstitute.substack.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>