Foresight Fellow Special | Siméon Campos: On Governing AI for Good
“I think safe AGI can both prevent a catastrophe and offer a very promising pathway into a eucatastrophe.”
This week we are dropping a special episode of the Existential Hope podcast, where we sit down with Siméon Campos, president and founder of Safer AI, and a Foresight Institute fellow in the Existential Hope track. Siméon shares his experience working on AI governance, discusses the current state and future of large language models, and explores crucial measures needed to guide AI for the greater good.
Siméon’s Recommended Resources
Siméon Campos' Website - simeon.ai
Science4All YouTube Channel - Science4All
David Krueger's Work - David’s website
Siméon’s Vision for the Future: Safe AGI Benefiting All
Siméon envisions a eucatastrophe in which advanced AI systems are built to be safe by design, so that they pose no catastrophic risks. Robust governance through a multi-stakeholder bargaining process is crucial to ensure AGI benefits the majority rather than a select few, since advanced AI could otherwise enable unprecedented control over people. Such governance could also help technology loosen the constraints imposed by genetics or birthplace, letting individuals become more like who they wish to be. With safe AGI and solid international governance, technology could address major global problems such as poverty, hunger, and disease, creating a hopeful and positive future for humanity.
About the art
This art piece was created with the help of DALL·E 3.
Library Recommendations
What is Intelligence? - Luke Muehlhauser. Attempts an explanation of intelligence.
Existential Hope Transformative AI Institution Design Hackathon - Foresight Institute. Nine proposals for new governance mechanisms for transformative AI.
Building Safe AI - Andrew Trask. Describes how federated learning could be leveraged to build an AI system that can produce insights based on the encrypted data of two mutually suspicious parties without itself gaining access to the data or leaking any information about its own algorithms.
The Landscape of AI Safety and Beneficence Research - Richard Mallah. Overview of the AI safety landscape.
About the Foresight Fellowship on Existential Hope
The Foresight Fellowship is a one-year program committed to giving change-makers the support to accelerate their bold ideas into the future. The mission is to catalyze collaboration among leading young scientists, engineers, and innovators who work to advance technologies for the benefit of life.
Learn more about our work at existentialhope.com.