"Recursive Self-Improvement and the World's Most Important Math Problem." In 1965, I. J. Good suggested that smarter minds can make themselves even smarter, leading to a runaway positive feedback loop that he termed the "intelligence explosion". But how do you build an Artificial Intelligence such that it remains stable and friendly through the ascent to superintelligence? Eliezer Yudkowsky talks about the implications of recursive self-improvement, and how it poses the most important math problem of this generation.
This one is going to be really interesting: Eliezer has thought deeply, and for a very long time, about Artificial Intelligence and what will happen when machine intelligence surpasses our own in the not-so-distant future. The prospect scares the hell out of a lot of people, one more reason for us to take a closer look. Please add your links to information sources to the companion Wiki page.
Eliezer Yudkowsky is one of the foremost thinkers on the Singularity. He is a cofounder and current Research Fellow of the Singularity Institute for Artificial Intelligence. Alongside Artificial Intelligence, Yudkowsky's interests include Bayesian probability theory, Bayesian decision theory, human rationality, and evolutionary psychology.
Yudkowsky is the author of the papers "Levels of Organization in General Intelligence" and "Creating Friendly AI."
A Future Salon has the following structure: 6-7 pm networking with light refreshments, proudly sponsored by SAP; 7-9+ pm presentation and discussion. SAP Labs North America, Building D, Room Southern Cross, 3410 Hillview Avenue, Palo Alto, CA 94304. As always, free and open to the public. Improve your commute by sharing it with a fellow Futurist: check the Ride Board for opportunities. Please RSVP at http://tinyurl.com/9rq8p, so we can get enough food and drinks.
For the people that can't make it, we are webcasting; point your QuickTime player to:
IRC chat as usual:
Added by finnern on February 23, 2006