There are different ways to approach telling people about effective altruism (or caring about the future of humanity or AI safety etc):
“We want to work on solving these important problems. If you care about similar things, let’s work together!”
“We have figured out what the correct things to do are, and now we are going to tell you what to do with your life.”
It seems like a lot of EA university group organisers are doing the second thing, and to me, this feels weird and bad. Much of my disagreement with them about specific practices stems from that second frame: for example, I find it icky to introduce people to EA using prepared speeches written by someone else, and unhealthy to think of the people who engage with your group in terms of where they are in some sort of pipeline.
I think the first framing is a lot healthier, both for communities and for individuals who are doing activities under the category of “community building”. If you care deeply about something (eg: using spreadsheets to decide where to donate, forming accurate beliefs, reducing the risk we all die due to AI, solving moral philosophy, etc) and you tell people why you care and they’re not interested, you can just move along and try to find people who are interested in working together with you in solving those problems. You don’t have to make them go through some sort of pipeline where you start with the most appealing concepts to build them up to the thing you actually want them to care about.
The first framing is also healthier for your own thinking, because putting yourself in the mindset of trying to persuade others is, in my experience, pretty harmful. When I have been in that mode in the past, it crushed my ability to notice when I was confused.
I also have other intuitions about why the second approach just doesn’t work if you want to attract highly capable people who will actually solve the biggest problems, but in this comment I just wanted to point out the distinction between the two approaches. I think they are distinct mindsets that lead to very different actions.