Advancing Safe AI: Neurotech, BCI, and WBE for AI Alignment
Explore the potential future of safe AI in Foresight’s new report by Naveen Rao, highlighting neurotech, BCI, and WBE insights from the 2024 Neurotech Workshop hosted by Foresight Institute.
As AI continues to advance rapidly, the fields of whole brain emulation (WBE) and brain-computer interfaces (BCI) are becoming increasingly crucial in shaping the future of human-AI interaction and ensuring AI safety. Foresight is releasing a new report that explores developments in these areas, focusing on their potential to align artificial general intelligence (AGI) with human values and sharing insights from our recent workshop on the topic.
In May 2024, 60 experts convened in Berkeley, California for a two-day workshop chaired by Allison Duettmann (Foresight Institute) and Anders Sandberg (Institute for Futures Studies) to explore the potential of WBE and neurotechnologies to guide alignment in AGI.
The workshop created a unique forum to share and cross-pollinate ideas, hold critical discussions, and establish common knowledge from which to build consensus.
Key highlights include:
WBE for Aligned AGI: While artificial intelligence has progressed dramatically over the last decade, 2024 may be remembered as the year that “AGI” became a feature, not a bug, of corporate R&D efforts. Calls for safety and caution have taken a backseat to competitive pursuit. Interest in creating aligned AGI may not be sufficient without a clearer strategy for how alignment might be achieved. Whole brain emulation is a concept that holds the potential to advance our understanding of human nature in a way that enables alignment.
Defining WBE and Its Challenges: In 2008, in a landmark WBE Roadmap report, Anders Sandberg defined WBE as “the possible future one-to-one modeling of the function of the human brain.” And yet, WBE remains a theoretical construct, as technology has only recently advanced to the point where such models are plausible. Key challenges to the successful emulation of the human brain include a lack of consensus on the best path forward, compounded by uncertainty about both the scale of funding and the timelines required for each potential pathway.
Pathways to Aligning AGI
Hi-Fi Approaches: Advances in hardware and software are creating an emerging opportunity to build a digital representation of the human brain. With synergies between breakthroughs in brain mapping, neural networks, and the understanding of neuronal structure and function, the possibility of creating a high-fidelity digital brain is becoming less theoretical.
Lo-Fi Approaches: Low-fidelity approaches seek to build pathways toward alignment that originate from existing foundational research and development in connectomics, evolutionary biology, and artificial intelligence. By incorporating our emerging understanding of neural dynamics, research frontiers with animal models, novel virtual neuroscience methods, and other advances, proponents of Lo-Fi pathways argue for a faster, more cost-effective strategy.
BCI-Based Approaches: While today’s implanted neural devices represent an early effort at brain-computer interfaces, advances are emerging. By mid-2024, over half a dozen companies had registered their products with the FDA in pursuit of regulatory clearance via the investigational device exemption (IDE) pathway. One of those companies, Neuralink, has openly stated its long-term focus on human enhancement, while another, Synchron, recently announced that one of its trial participants used live integrations with Apple and OpenAI to control consumer technologies via BCI. Despite this pace of innovation, BCI and other in-vivo enhancements represent an expensive, time-consuming, and high-risk pathway when viewed through the lens of AGI alignment.
Prosocial Approaches: Incorporating humanity’s values into AI, via our emergent understanding of self-modeling, empathy, interpersonal psychology, and other prosocial attributes, represents another pathway to alignment. These approaches offer a lower-cost, faster route to consider.
All talks and presentations from the workshop can be viewed here.
Foresight Institute’s Upcoming WBE Grants Program
Beyond convening experts around the world for critical discussions, the Foresight Institute has created several novel funding mechanisms to directly drive the further development of promising research projects that advance AI safety. These include dedicated programs in Neurotechnology and WBE, summarized below, as well as in Security and Cryptography, and Multi-Agent Simulations.
These programs are supported through an endowment designed to disburse annual grants while also funding a grand prize for the achievement of WBE. With several paths to WBE gradually emerging, the grants program is designed to encourage parallel experimentation with a multitude of approaches, supporting the growth of this nascent yet heterogeneous research community.
To learn more about supporting the endowment via tax-deductible donations or getting involved, please contact Niamh Peren, Chief of Strategy at Foresight Institute: niamh@foresight.org.