What tactics do you think could be more effective at promoting effective altruism? I’ve been thinking about promoting EA practices in a public interest tech context, but I’d be flying blind: I have no idea what’s effective, and I have relatively little experience in PIT overall. One possibility would be to deliver an EA talk at a tech conference such as GHC.
“trying to do things that will really help people, rather than ignoring their needs in favor of what we think will help”
is uncontroversial; I think cause prioritization would be more controversial, though. I wouldn’t be surprised if people working on one cause objected to being told that they’d have more impact working on a different one.
In past Splash courses I’ve taught, I’ve noticed that some students were already familiar with the topic; for example, during my introductory machine learning class, one student asked about a type of neural network that I had heard of but was unfamiliar with. Do you think I’d be mostly preaching to converts?
Don’t worry about “preaching to converts” in your Splash class; I very much doubt many of your students will have any familiarity with EA beyond a passing mention somewhere.
Discussing effective tactics for promoting EA would take a long time. If you want to learn about some things other folks have done, check out the EA Hub’s list of resources or the top community posts on the Forum (not everything at that link will be about promotion, but if you skip around you’ll find some relevant articles).
With cause prioritization (and other topics), you’ll probably be fine as long as you avoid negativity. My framing is never “don’t work on X”; instead, it’s (to paraphrase): “What are you hoping to get by working on X? Does it seem to be working? What led you to work on X rather than other things in the same general area?” My overall message is “everyone sees the world a little differently, but for any way you see the world, there will be some strategies for helping that are likely to work out better than others. Cause prioritization is about figuring out the best thing you can be doing, according to your values.”
Prioritization isn’t exclusive to EA: other entities do it all the time based on their own values (e.g., environmental agencies trying to weigh policies by how they affect the lives of their own citizens, but not necessarily people in other countries). EA just has fewer limits on the sorts of ideas it considers, and on which beings we care about helping.
(This is a very rough perspective, and belongs to me rather than my employer, but the point of “work with people’s values, don’t tell them to value other things” stands.)
Thank you—this is really helpful feedback!