Speaking for myself, I think I have strong ideological reasons to believe that predictably doing (lots of) good is possible.
I also have a bias towards believing that things that are good for reason X are also good for reason Y, and this problem rears its head even when I try to correct for it. E.g., I think Linch (2012-2017) too easily bought into the "additional consumption $s don't make you happier" narratives, and I'm currently lactovegetarian even though I started being vegetarian for very different reasons than what I currently believe to be the most important ones. I perceive other EAs as, on average, worse at this than me (I'm not sure of the right term; decoupling?), which is not necessarily true of the other biases on this list.
A specific instantiation of this is that it’s easier for me to generate solutions to problems that are morally unambiguous by the standards of non-EA Western morality, even though we’d expect the tails to come apart fairly often.
To a lesser extent, I have a bias towards thinking that doing (lots of) good comes from things that I and my friends are predisposed to be good at (e.g. cleverness, making money).
Another piece of evidence is that EAs seem far from immune to ideological capture on non-EA issues. My go-to example is the SSC/NYT thing.