EA Hamburg Session – Uncontrollable AI

Want to learn more about Effective Altruism? Then come along to our monthly input sessions with workshops, talks and discussions! We will meet at the Bucerius Law School in Room 1.11.

If you want to participate and haven't been in contact with us before, just drop us a line.


Is uncontrollable AI really an existential risk?
moderated by Karl von Wendt

AI is developing rapidly and we may achieve human-level general intelligence within a decade or two. Some effective altruists worry that we might run into an existential catastrophe if we don’t solve the “alignment problem” soon. At the same time, most people outside the EA and AI Safety communities ignore the problem completely, classifying it as “science fiction” and comparing fears about it to worrying about “overpopulation on Mars”. So is there really a problem, or are some of us just trapped in some kind of hysterical bubble? And if it is real, what can we do to better understand and mitigate the risk? Let’s discuss!

You can find more info about uncontrollable AI in Karl's post on LessWrong and on 80,000 Hours.