A few months ago I felt like some people I knew within community building were doing a thing where they believed (or believed they believed) that AI existential risk was a really big problem, but instead of just saying that to people (e.g. new group members), they said it was too weird to state outright, so you had to walk people through less “weird” things like content about global health and development and animal welfare before telling them you were really concerned about this AI thing.
And even once you got to the AI topic, you had to earn people’s trust by talking about misuse risks first in order to be more convincing. This would have been fine if those were their actual beliefs. But in a couple of cases, it was an intentional strategy to warm people up to the “crazy” idea that AI existential risk is a big problem.
This bothered me.
To the extent that those people now feel more comfortable directly stating their actual beliefs, that seems like a good thing to me. But I’m also worried that people won’t just directly state their beliefs and will instead continue to play persuasion games with new people, just about different things.
E.g. one way this could go wrong: group organisers try to make it seem to new people like they’re more confident about which interventions within AI safety are helpful than they actually are. Things like: “Oh hey, you’re concerned about this problem, here are impactful things you can do right away, such as applying to this org or going through this curriculum,” when they are much more uncertain (or should be?) about how useful the work done by the org is or how correct/relevant the content in the AI safety curriculum is.
I have a couple of thoughts here, as a community builder and as someone who has thought similar things to what you’ve outlined.
I don’t like the idea of bringing people into EA based on false premises. It feels weird to me to ‘hide’ parts of EA from newcomers. However, I think the considerations involved are more nuanced than this. When I have an initial conversation with someone about what EA is, I find it difficult to capture everything in a way that comes across as sensible. If I say, “EA is a movement concerned with finding the most impactful careers and charitable interventions,” I think to many people this automatically sounds like it’s about global health and poverty. ‘Altruism’ is in the name, after all. I don’t think many people associate the word ‘altruism’ with charities aimed at ensuring that artificial intelligence is safe.
If I lead with concerns about AI and say, “EA is a movement aimed at finding the most impactful interventions… and one of the top interventions that people in the community care about is ensuring that artificial intelligence is safe,” that also doesn’t really capture the essence of EA. Many people in EA primarily care about issues other than AI, and summarising EA this way to newcomers is going to turn off some people who care about other issues.
The idea that AI could be an existential risk is (unfortunately) just not a mainstream idea yet. Over the past several months it seems to have been talked about a lot outside of EA, but prior to that, very few major media organisations or celebrities brought attention to it. So from my point of view, I can understand community builders wanting to warm people up to the idea. A minority of people will be convinced by hearing good arguments for the first time. Most people (myself included) need to hear something said again and again in different ways in order to take it seriously.
You might say that these are really simplistic ways of talking about EA, and that there’s a lot more I could say than a couple of simple sentences. That’s true, but in many community building situations, a couple of sentences is all I’m going to get. For example, when I’ve run clubs fair booths at universities, many students just want a short explanation of what the group stands for. When I’ve talked with friends or family members who don’t know what EA is, most of the time I get the sense that they don’t want a whole spiel.
I also think it is not necessarily a ‘persuasion game’ to think about how to bring more people on board with an idea; it is thinking seriously about how to communicate ideas effectively. Communication is an art form, and there are good and bad ways to go about it. Celebrities, media organisations, politicians, and public health officials all have to figure out how to communicate their ideas to the public, and it is often not as simple as ‘directly stating their actual beliefs.’ Yes, I agree we should be honest about what we think, but there are many ways to go about this. For example, I could say, “I believe there’s a decent chance AI could kill us all,” or I could say, “I believe we aren’t taking the risks of AI seriously enough.” Both communicate a similar idea, but will be received quite differently.
Thank you for sharing your thoughts here.
I found it really difficult to reply to this comment, partly because it is difficult for me to inhabit the mindset of trying to be a representative for EA. When I talk to people about EA, including students who might be interested in joining an EA student group, it is more like “I like EA because X; the coolest thing about EA for me is Y; I think Z, though other people in EA disagree a bunch with my views on Z for W reason and are more into V instead,” rather than trying to give an objective perspective on EA.
I’m just really wary of changing the things I say until it gets people to do the thing I want (sign up for my student group, care about AI safety, etc.). There are some situations where that might be warranted, like if you’re doing some policy-related work. But when running a student group and trying to attract people who are really smart and good at thinking, the thing I’d want to do is just state what I believe and why I believe it (even and especially if my reasons sound dumb) and then hear where the other person agrees or disagrees with me. I don’t want to state arguments for EA or AI safety to new members again and again in different ways until they get on board with all of it; I want us to collaboratively figure things out.