Existential Hope Drop #27: Roman Yampolskiy: The Case for Narrow AI
AI holds great promise, but can we control it? Dr. Yampolskiy thinks we can, but only by keeping it narrow.
This month, we’re proud to feature Dr. Roman Yampolskiy, a leading voice in AI safety. Dr. Yampolskiy stands out in the AI safety community by advocating for narrow, task-oriented AI systems.
“I'm arguing that it's impossible to indefinitely control superintelligent systems,” he cautions.
Despite the challenges, Dr. Yampolskiy is optimistic about the future. He believes that narrow AI can help address complex issues in everything from politics and relationships to longevity and health.
Roman’s Recommended Resources
AI: Unexplainable, Unpredictable, Uncontrollable – Roman Yampolskiy, 2024
The writings of Eliezer Yudkowsky on LessWrong
Roman’s Vision for the Future
Despite his concerns about AGI, Yampolskiy sees a brighter future powered by narrow AI systems. He envisions a world where we can tackle specific, crucial problems without the risks of uncontrollable AI.
Imagine a world where you have a personal financial advisor, relationship coach, and other tailored guidance available at your fingertips, all powered by narrow AI systems. Dr. Yampolskiy envisions this technology revolutionizing our daily lives, making expert advice accessible to everyone. But it doesn't stop there: he also suggests that narrow AI could transform politics, leading to more direct representation and more efficient governance, or even tackle monumental challenges like curing aging!
Ultimately, Dr. Yampolskiy envisions a future where AI safety concerns are resolved, freeing humanity to explore new frontiers of knowledge and experience. In this optimistic scenario, we wouldn't be consumed by worries about technological threats. Instead, we’d be empowered to push the boundaries of human potential, channeling our energies into innovation and discovery rather than self-preservation.
About the art
This art piece was created with the help of DALL·E 3.
Xhope Library Recommendations
INTELLIGENCE RISKS: TAKEOFF AND ALIGNMENT
Information Security Concerns for AI & The Long-term Future - Jeff Ladish. Introduces information security as a crucial problem that is currently undervalued by the AI safety community.
AGI Ruin: A List of Lethalities - Eliezer Yudkowsky. Forty-three reasons why Yudkowsky is pessimistic about our world's ability to solve AGI safety.
AI Alignment & Security - Paul Christiano. On how the relationship between security and alignment concerns is underappreciated.
Superintelligence: Coordination & Strategy - Roman Yampolskiy, Allison Duettmann. A collection of papers on challenges and strategies for ensuring the cooperative development of advanced AI.
Community Updates
Edge Esmeralda: A Pop-Up Village for a Better Future
Edge Esmeralda, an inspiring "pop-up village" for those dedicated to creating a better future, wraps up in a few days. The event has provided a space to dive deep into work, learn from experts, and incubate novel technologies and ways of living. The village serves as a prototype for the permanent town of Esmeralda, and the insights gained here will shape that long-term vision. Stay tuned for future events as the journey to build a better tomorrow continues.
A Newsletter Celebrating Human Progress
Unfortunately, there is often a wide gap between the reality of human experience, which is characterized by incremental improvements, and public perception, which tends to be quite negative about the current state of the world and skeptical about humanity’s future prospects.
This newsletter, brought to you by Human Progress, aims to narrow that gap.
With a weekly progress roundup, as well as in-depth, data-backed articles, this newsletter cuts through the gloom of negative news.
2024 Progress Conference: Toward Abundant Futures
Friday, October 18th, 9am – Saturday, October 19th, 5pm.
Presented by the Roots of Progress, together with the Foresight Institute, Works in Progress, the Institute for Progress, and the Institute for Humane Studies, this conference aims to connect people and ideas in the progress movement: to see more scientific, technological, and economic progress for the good of humanity, and to envision a bold, ambitious, flourishing future.
Speakers include Patrick Collison, Tyler Cowen, and Steven Pinker.
Effective Altruism versus Progress Studies: Clara Collier in Discussion with Jason Crawford
And I, too, am wary: not of too much optimism, but of the wrong kind. I am wary of complacent, passive optimism: the assumption that progress is inevitable or automatic, or that it is automatically good. I want us to have an active optimism — “optimism of the will” — the energy, courage, and determination to work for a better future, to create it through choice and effort, to embrace whatever problems or risks arise and to solve them. Hopefully that’s something both communities can agree on.
- Jason Crawford, for Asterisk
Learn more about our work at existentialhope.com.