Eliezer Yudkowsky is an American rationalist, writer, and AI safety researcher. He is known for his work on the potential risks and impacts of advanced artificial intelligence (AI) systems.
Yudkowsky is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a non-profit organization focused on ensuring that smarter-than-human artificial intelligence has a positive impact. He was also a research associate at the Future of Humanity Institute at the University of Oxford (which closed in 2024), where he contributed to the study of existential risk and the long-term future of humanity.
Yudkowsky's work has focused on the potential risks posed by advanced AI systems, including the possibility of a superintelligent AI that could pose an existential threat to humanity. He has written extensively on this topic and is the author of the "Sequences," a collection of online essays on the foundations of rationality and the challenges of aligning AI with human values. Yudkowsky is an autodidact: he did not attend high school or college and has no formal academic training in computer science or artificial intelligence.
Bio source: Claude | Updated: 2025-12-31 06:18