I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including through the motivation effects discussed here.
This is interesting. When you give this advice, is motivation the bigger factor for you, as opposed to increasing the variance of efforts to do good as a way of doing more good?
I am not sure in what contexts you give this advice, but I worry that in some cases it might be inappropriate—say, cases where people's gut feelings and immediate intuitions are clearly guiding them in non-effective-altruist directions.
I'd prefer a norm where people interested in doing the most good initially delegate their decisions to people who have thought long and hard about the topic, and, if they want to try something else, elicit feedback from the community first. At least as long as the EA community also has a norm of being open to new ideas.
Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta-EA and other movement concerns.