[Question] How to explain AI risk/EA concepts to family and friends?

Background: I’m an undergraduate CS major. Recently, I mentioned to my mom that I’ve been getting involved in the “effective altruism” community, and that I’m increasingly interested in pursuing a PhD. The other day, she asked me why exactly I wanted one.

Me: Well, I want to help others as much as possible.

Mom: Okay, how are you going to help people with a PhD?

Me: Well, I don’t know… maybe try to reduce existential risks…

Mom: Whoa, existential risks?

Me: Uh, I don’t know, I mean, maybe it wouldn’t be that bad, but it seems likely that AI will be very important in the future. And if AI systems have good goals that match up with human goals, they could solve lots of the world’s problems, so I really want to increase the odds of that happening.

Mom: So what’s going to happen if AIs don’t have good goals?

Me: Well, I guess… they could kill off humanity?

Mom: Whoa!

Fortunately, we moved on in the conversation at this point, but I don’t think I gave her the best first impression of these ideas. Does anyone know of any good articles or videos for a popular audience that present the AI alignment problem in moderate depth, without too much sensationalism? I’m sure there are people who would do a much better job than I did at explaining these concepts to my mom. Similarly, content introducing EA concepts in general would be helpful.

It’s most important to me to convince my mom that what I’m doing is worthwhile, but I also want to be able to talk about my career plans with non-EAs without them thinking I’ve joined a Doomsday cult. For people working in existential risk and other “weird” areas—how do you usually talk about your work when it comes up in conversation?
