Hey guys, awesome work from what I’ve heard. What was your response to “Who are you to tell people how to live their lives?” and to pre-existing negative ideas about EA? It seems quite positive that you ended those conversations warmly, and it would be interesting to know more about what you said.
(Feel free to just link to it if you were purely following something already written up, though.)
Hi Bettsy,
I guess my approach was something like the following:
Outlook:
1. Treat it as an opportunity to make a connection with the person (not as though you’re trying to debate or “convert” them).
2. Be curious about their experience. This includes asking questions to get closer to the heart of what they feel is wrong with EA, letting them voice it, and repeating back some of what you heard so they feel understood.
3. Give my own personal account, rather than “trying to represent EA”. Hopefully this humanizes me, and EA by association, in their eyes.
4. Look out for misconceptions about EA, or negative gut feelings, that might be driving their concerns, and focus somewhat on reframing those.
Example:
One interaction went something like the following, with person (P) and myself (M):
P: Why are you guys here?
M: We’re here to try to figure out how to improve the world!
P: Oh I’ve heard of effective altruism before. But like, what gives you the right to tell people how to live their lives?
M: (sensing from tone, making eye contact, and sincerely curious) Oh are you concerned that an empirical approach to these questions might not be the best way to help people?
P: Well, like, EA just tells people what to do right?
M: I guess I think of it more like we only have limited resources, and IF WE WANT to help people we should think carefully about the OPPORTUNITY and use evidence to try to help them.
P: Hmm (body language eases).
M: Yeah… Like, I think it’s important to make sure when we are trying to improve the world that what we’re doing is actually going to help.
P: Hmm.
M: Yeah… would you like a lollipop?
P: Uh nah I think I have to go. Thanks.
M: No worries. Nice to meet you.
P: Yeah you too.
The above isn’t revelatory. But it seemed to me that the person’s conception shifted away from EA as a kind of enemy entity, towards seeing EAs as friendly people it’s possible to make personal connections with, people who want to make the world better using evidence.
In another example, someone came up, voted for AI, said something like “Oh I’m not really on board with this whole approach”, and turned to leave. We asked them something like “Oh really, what don’t you like?” It turned out they knew HEAPS about EA, and our curiosity about their points of disagreement led to a really fun discussion, to the point where near the end they said “well, it’s nice to see you guys on campus”, and we genuinely wanted to keep talking with them.