Existential Hope Special with the Future of Life Institute on Worldbuilding
Special Hope Drop with the Future of Life Institute
Welcome to Foresight’s second-ever Hope Drop, a monthly release from our Existential Hope project that includes a new podcast episode on existential hope, NFT artworks, and X-Hope bounties. In this month’s episode we interview Anthony Aguirre and Anna Yelizarova from the Future of Life Institute (FLI) about their current Worldbuilding challenge.
FLI is welcoming entries from teams across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence. The last day to submit entries is April 15, 2022.
In this interview we talk about the concept of worldbuilding, what we need to build better worlds, and why it is important to encourage imagining more positive visions of the future.
Imagine an event where we collectively agree on more ethical practices around the proceeds of AI, so that all people can have their needs met by society.
We asked Anna Yelizarova if she could come up with an example of a potential eucatastrophe. This is what she said:
To me, a eucatastrophe is not just something very good happening out of the blue. It’s a storytelling device: everything is on the brink of collapse, and things are about to end very badly for all the characters we care deeply about. Then suddenly, a knight in shining armor arrives with an army and saves us. I think that’s what Tolkien was referring to as a eucatastrophe.
Thinking about how this storytelling tool would look in relation to AI: there would be a lot of people fed up with their needs not being met by society, frictions building up, and perhaps even some agreement being breached. Then the eucatastrophe is an event that addresses the needs of the people who are complaining.
The paper “The Windfall Clause” puts forward the idea of an agreement in which all companies building AGI agree beforehand that, if they build an AI that accrues more than a certain percentage of global GDP, the windfall will be placed in a trust, and it will be communally decided how that money is spent.
So the eucatastrophe would be an event where we collectively agree that enough is enough; where we break out of this paradigm and become bigger people in a scenario where nobody would expect it to happen; where you’re pleasantly surprised, and we put a stop to the machinery and agree on more ethical practices around the proceeds of AI.
Describe a day in the life after an event in which humanity collectively agrees on more ethical practices around the proceeds of AI, so that all people can have their needs met by society.
Submit your bounty response for a chance to be awarded 0.15 ETH!
See the full animation of the NFT depicting Anna’s eucatastrophe scenario here to get inspired! The proceeds from the NFT sale will support scientific research.
Imagine widespread availability of personal AI assistants without self-interest that empower and assist individuals in achieving their goals.
We asked Anthony Aguirre if he could come up with an example of a potential eucatastrophe. This is what he said:
Imagine a very powerful AI system that doesn’t have self-interest and that works for you personally, advancing your own goals and interests. This would empower individuals and would use high-powered information technologies for people rather than against them. It would be a personal assistant deluxe: a loyal AI assistant.
It would get to know you really well and then help you achieve your goals, rather than getting to know you well so it can manipulate you, as some other systems may do. The eucatastrophe would be the widespread availability of these high-powered, loyal AI assistants.
Very high-powered ones could help you do science or solve difficult problems, while others could simply help you with your everyday life and with navigating this incredibly complicated world we live in, which is full of information dynamics and, frankly, things that are often trying to take advantage of you.
This would be a companion that wouldn’t be annoying the way Siri or Alexa can be, because you wouldn’t feel like you’re in conflict with it all the time. It doesn’t run counter to your complicated desires and goals; it is explicitly constructed to help you figure out what those desires and goals are, and to realize them on an individual basis.
Describe a day in the life when we have widespread availability of personal AI assistants without self-interest that empower and assist individuals in achieving their goals.
Submit your bounty response for a chance to be awarded 0.15 ETH!
See the full animation of the NFT depicting Anthony’s eucatastrophe scenario here to get inspired! The proceeds from the NFT sale will support scientific research.
Join the XHope Community!
If you’d like to support this effort, we gratefully receive donations dedicated to “Existential Hope” at our parent non-profit, the Foresight Institute, here.