Existential Hope Drop #22: Gus Docker, Future of Life Institute
Beyond Survival: Envisioning a Technologically Enhanced Utopia
In this Hope Drop we are joined by Gus Docker, the host of the Future of Life Institute's podcast, where he interviews a wide variety of prominent researchers, policy experts, philosophers, and other influential thinkers. With a background in philosophy and computer science, Gus is also involved in Effective Altruism Denmark.
Despite the threat of existential risks from AI, and the difficulty of envisioning the far future's technological changes, Gus is excited about the future. He discusses technologies that could enable diverse experiences and consciousness exploration, as well as benefits to our collective mental health: a future where individuals easily manage and enhance their mental well-being, harmonizing happiness and productivity and thereby contributing to societal welfare.
Explore his vision of the future in this podcast episode and through an AI artwork inspired by his hopeful vision.
Gus' Recommended Resources
The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness – Sharon Hewitt, 2016.
Discussion on The Crazy Train – Ajeya Cotra on the 80,000 Hours podcast, 2021.
Steve Omohundro on Provably Safe AI – FLI Podcast, 2023.
The Precipice – Toby Ord, 2020.
Beyond Survival: Envisioning A Technologically Enhanced Utopia
Gus begins by balancing his excitement about the future with the ever-present threat of existential risks, especially from AI. He hopes that we will be able to deal with risks as they come: "AI won't be solved. AI safety will be worked on and we will solve some problems; new problems will arrive, and we will solve those problems. And we hopefully continue doing this until… indefinitely."
If we can get past this, he is excited about how AI and other future technologies could shape the world. For example, he imagines technologies that let people sample experiences from others and try out different conscious states, creating a world more open to diverse experiences.

When pressed to imagine a eucatastrophe, he pictures a significant and positive turning point in mental health: a future where individuals can address their mental health issues, even minor ones like a fleeting sadness, without the need for clinical intervention. People would have access to innovative solutions, possibly through devices in their own homes, that enable them not only to overcome psychological challenges but also to enhance their overall happiness and productivity. As individuals become happier, they become more productive, creating an upward spiral of well-being. This new reality fosters personal strength and empowers people to contribute more effectively to the welfare of others and society. In essence, it is a transformative leap in human mental health and well-being, marked by a harmonious balance of emotional fulfilment and productive engagement in life.
However, he also reflects on the difficulty of imagining wider technological change, especially in future utopias, likening the disparity between our current lives and those in the (not-so-distant) far future to the gap between life in the Stone Age and life today. “I'm not sure I will be able to grasp what's going on in the future if I'm not somehow enhanced in order to follow along with what advanced AI might be doing.”
Related Xhope Library Recommendations:
Which Way is Forward: Value Differences, Drift, and Convergence
Metaethics Sequence - Eliezer Yudkowsky. Especially The Moral Void, Whither Moral Progress, Existential Angst Factory, Morality as Fixed Computation, Value is Fragile, Could Anything Be Right?, and Changing Your Metaethics.
Facing the Unknown, Infinite Ethics - Nick Bostrom. Our epistemic limitations about the (long-term) consequences of our actions make it difficult to choose the right action.
Reflective Equilibrium - John Rawls. Another method for handling normative uncertainty. It suggests working back and forth among our moral intuitions about actions, the principles that govern them, and the theoretical considerations behind them, revising each as necessary until they cohere.
Fundamental Value Differences Are Not That Fundamental, The Whole City is Center, Value Differences as Differently Crystallized Metaphysical Heuristics - Scott Alexander. Argues that human value differences are more shallow than we commonly think and may track the same universal core values that could help us reconstruct a common crude human morality.
Three Worlds Collide - Eliezer Yudkowsky. A novella about how much values can differ across different mind-architectures.
Artificial Intelligence, Values, and Alignment - Iason Gabriel. A paper on AI alignment that also discusses the problem of people holding different values, and how Contractualism, Rights, Rawls's Veil of Ignorance, or Social Choice Theory may help us reach an overlapping consensus.
NEWS FROM THE XHOPE ECOSYSTEM
Apply today to the 2024 Worldbuilding Course – Existential Hope
In this virtual and interactive course, we engage with the most pressing global challenges of our age—climate change, the risks of AI, and the complex ethical questions arising in the wake of new technologies. Our aim is to sharpen participants’ awareness and equip them to apply their skills to these significant and urgent issues.
In this course, you'll craft detailed and sophisticated visions of the world in 2045, with a special focus on integrating AI into our worldbuilds. We will explore how AI, along with other emerging technologies and sciences, will shape our future. This includes looking into the economic frameworks, institutions, and societal values that will support them. You'll gain proficiency in critical and strategic thinking methodologies such as red teaming and forecasting.
Dates & time: Starting February 14, 2024, 6 pm UTC
Location: Remote, across Zoom
Please see our website for a detailed curriculum, other resources, some of our mentors for this course, and to apply.
Aiming for Coordination: The Simon Institute
The Simon Institute is ramping up its efforts to improve the multilateral system's capacity to govern rapid technological change, with a specific focus on AI. They hope to contribute to building governance systems fit for the 21st century, working toward their vision of a world where humanity coordinates so that life can flourish.
See their theory of change.
Follow the ACX 2024 Prediction Contest – Metaculus
What began as Scott Alexander publishing yearly predictions on Slate Star Codex, and later Astral Codex Ten, has evolved into an annual forecasting-community tradition. Follow the tournament, with questions spanning geopolitics, AI developments, space exploration, and more.
Possibilia Magazine
Possibilia is a literary magazine that publishes optimistic, realistic, scientific fiction. They're bringing you positive visions of the future, in magazine form — expressed through short stories and nonfiction companion pieces, brought to life with curated illustration and cutting-edge design.
Subscribe to their Substack to be notified when Issue 0 is released.
Contributors and collaborators are also wanted: writers (fiction and non-fiction) and artists can submit their work here! They are also looking for people who can spread the word, people with operations or industry experience, and anyone else looking to get involved.
Learn more about our work at existentialhope.com.