What could AI look like in 2035? Two visions for AI’s future
With contributions from Vitalik Buterin, Adam Marblestone, Allison Duettmann, Glen Weyl, Christine Peterson, and more
Today, Foresight Institute’s Existential Hope program is launching AI Pathways: two in-depth scenario reports, the result of months of work, designed to open up the meme space of what different AI futures could look like.
Rather than prescribing a single preferred outcome, the reports explore two plausible futures, shaped by the choices we make and the systems we choose to build.
The Tool AI Pathway
A future shaped by powerful but controllable AI systems with limited agency. This scenario explores the idea that many benefits often associated with AGI could instead be achieved through advanced, tool-like systems. It asks: what if we focus on scaling such systems, designed to assist rather than act independently, in ways that are both safe and effective?
The d/acc Pathway
A future shaped by decentralized, democratic, and defensive acceleration, where coordination technologies drive progress across science, governance, and infrastructure. This scenario builds on growing interest in bottom-up and resilience-focused approaches to technological acceleration. While the concept is gaining traction, it has often remained abstract, especially given its intentionally plural nature. Here, we aim to make it concrete: what might d/acc look like in practice?
Why we’re doing this:
Much of today’s discussion around AI futures tends to focus on a few high-profile trajectories, often involving AGI, short timelines, or centralized control. But those aren’t the only possibilities.
We chose these two scenarios because they represent directions that are often mentioned, but rarely explored in detail:
Tool AI, as a path that might deliver major benefits without the risks of developing AGI
d/acc, as a decentralized and plural approach to progress, with a strong emphasis on defensive technologies
Our hope is that by making these options more concrete, we can help broaden the range of paths being considered, and support deeper reflection on which ones might be more or less worth pursuing.
Metaculus integration
To invite deeper discussion around the scenarios, we’ve partnered with Metaculus to launch a set of forecasting questions based on key milestones in each future. Alongside this, we’re running a $5,000 Commenting Prize.
How they were created:
Each scenario is designed to be plausible given specific conditions. The goal is to make these futures more tangible and discussable, while leaving room for critique and iteration.
Both reports were written by Linda Petrini and Beatrice Erkers, and developed through expert interviews and multiple rounds of feedback on drafts. The scenarios reflect a synthesis of many perspectives, and they shouldn’t be taken as endorsements or official positions of any individual listed below.
Contributors (interview and feedback participants):
d/acc Pathway:
Vitalik Buterin (Ethereum), Glen Weyl (Microsoft Research, RadicalXchange), Kevin Owocki (Gitcoin), Andrew Trask (OpenMined, DeepMind), Emilia Javorsky (Future of Life Institute), Deger Turan (Metaculus), Allison Duettmann (Foresight Institute), Soham Sankaran (PopVax), Christine Peterson (Foresight Institute), Marcin Jakubowski (Open Source Ecology), Naomi Brockwell (Ludlow Institute), Molly Mackinlay (Protocol Labs), Lou de Kerhuelvez (Nodes).
Tool AI Pathway:
Adam Marblestone (Convergent Research), Anton Korinek (University of Virginia), Anthony Aguirre (Metaculus, Future of Life Institute), Saffron Huang (Anthropic), Joel Leibo (DeepMind), Rif A. Saurous (Google), Cecilia Tilli (Cooperative AI Foundation), Ben Reinhardt (Speculative Technologies), Bradley Love (Los Alamos National Laboratory), Konrad Kording (University of Pennsylvania), Jeremy Barton (Nano Dynamics Institute), Owen Cotton-Barratt (Researcher), Kristian Rönn (Lucid Computing).
Videography by Petr Salaba (AI-generated).
We’re deeply grateful to everyone who contributed their time and insights to this experiment.
How you can engage:
Explore the scenarios
Discuss the Metaculus questions (with a $5K prize pool)
Share with others thinking about AI strategy
We’ll also be publishing follow-up content over the coming months, including podcast episodes and more scenario materials, and we’d love to collaborate or cross-post where useful.