Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term ones. He's done plenty of talks and podcasts; I'm not sure which is best, but if three hours of heavy content isn't a problem, the 80,000 Hours episode is a good choice.
There's already a completely mainstream x-risk: nuclear weapons (and, in the popular imagination, climate change). It could be useful to compare AI to these already-accepted risks. The second species argument can also be made pretty intuitive.
Bonus: here’s what I told my mum.
AIs are getting better quite fast, and we will probably eventually get a really powerful one, much faster and better at solving problems than people. It seems really important to make sure that it shares our values; otherwise, it might do crazy things that we won't be able to fix. We don't know exactly how hard it is to give AIs our actual values, and to verify that they got them right, but it seems very hard. So it's important to start working on this now, even though we don't know when it will happen, or how dangerous it will be.