I’ve skimmed this post—thanks so much for writing it!
Here’s a quick, rushed comment.
I have several points of agreement:
If we could get more people on board with the goal of EA (i.e., making the biggest positive difference they can), then that would be much better than just seeking out people who already have (or nearly have) that goal.
So it seems worth investing effort now into figuring out how to get people motivated towards this goal.
I agree that the four “reasons why people aren’t joining your introductory EA program” you give are true statements (although I’m less sure they’re the most important things to focus on).
I agree that getting people intrinsically motivated to maximise good seems really valuable if it can be done
But I think I disagree about several important things:
I think it’s true that doing good is beneficial for one’s own life. But I think that the magnitude of one’s impact matters much less for one’s own sense of purpose, self-approval, etc.
People can live very purposeful & fulfilling lives by picking a cause; being cause-neutral and trying to maximise your positive impact seems, if anything, slightly less fulfilling, because it means you’ll probably end up working on something more neglected, which tends to be less emotionally salient.
I think that helping already-altruistic people to realise that they care about the magnitude of their impact seems more promising than trying to help more people to be altruistic. I think that your program is mostly targeted at the second of these.
I suspect that the way people can end up with the goal of actually maximising good is more like:
Believe that the magnitude of your impact matters, and that bigger is better
Feel that having a large impact is achievable
Feel that doing the EA project is good for my own purposes (makes me feel fulfilled, etc.)
Identify as someone who is trying to do the EA project
Feel a sense of belonging to a social group that is trying to do the EA project
So I think I’m more keen on projects that focus on helping altruistic people to get on board with the EA project. I’d be very interested in any updates on how your plans go, though!
Thanks for writing up your thoughts Isaac! You present some thought-provoking perspectives that I have not yet considered.
I particularly resonate with your first point of disagreement, that individuals can derive personal benefits from being altruistic simply by choosing some cause. Your argument that striving for cause-neutrality and maximizing positive impact may be less fulfilling is a valid one. However, I am unsure why working on a more neglected cause would necessarily be less emotionally fulfilling. In fact, pursuing something “unique” may be quite exciting. Nonetheless, I agree that cause-neutrality may be less fulfilling, as we all have unconscious biases that may favor certain causes due to personal experiences or connections. This may make steering against these inclinations difficult, perhaps even unpleasant.
I also agree that targeting “already-altruistic people” who care about the magnitude of their impact is probably very promising. Social impact is heavy-tailed, so it is likely that these individuals could account for most of the net impact generated. I just think that EA university groups should not be the stakeholder group that makes this trade-off.
In my view, it is important to carefully consider how to differentiate and vary the strategies of EA university, city, and national groups.
With the target audience of university groups being very young adults, I believe it is detrimental to exclude those who may not be “there yet”. As I have previously argued, there are many young and ambitious individuals who have not yet determined their life’s direction, and they could easily be nudged towards becoming “already-altruistic”. The loss of counterfactual impact would be huge.
I would agree, however, that for city or national groups, a narrower focus might be a better strategy.
What are your thoughts on having a broader focus for EA university groups, but a narrower one for city groups?
Oh, to be clear: I think that almost all altruistic people do not care much about the magnitude of their impact (in practice).
So I think the approach I’d suggest is to focus on altruistic people and help them realise that, on reflection, they probably do really care about the magnitude of their impact.
That’s a much larger group than the people who are already magnitude-sensitive, and I think intervening there is currently more feasible than targeting people who have no existing interest in altruism.
I haven’t thought much about strategy for city/national groups, but I think I agree that later in life, people are much more set on their existing path, so if any stage is to focus on people who aren’t altruistic yet, it would be university or high-school groups.