Nice post! Regarding:

1. Strong ideological reasons to believe in a pre-existing answer before searching further (consider mathematical modeling of climate change or coronavirus lockdowns vs pure mathematics) [...] Unfortunately, effective altruism is on the wrong side of all these criteria.
I’m curious what you think these strong ideological reasons are. My opinion is that EA is on the right side of this criterion for most questions, because in EA you get a lot of social status (and EA Forum karma) for making good arguments against views that are commonly held in EA. I imagine that in most communities this is not the case. Maybe there is an incentive to think that a cause area or an intervention is promising if you want to (continue to) work within that cause area, but challenging anything within a cause area or an intervention seems encouraged.
Speaking for myself, I think I have strong ideological reasons to think that predictably doing (lots of) good is possible.
I also have a bias towards believing that things that are good for reason X are also good for reason Y, and this problem rears its head even when I try to correct for it. E.g. I think Linch (2012-2017) too easily bought into the “additional consumption dollars don’t make you happier” narratives, and I’m currently lacto-vegetarian even though I started being vegetarian for very different reasons than what I currently believe to be the most important. I perceive other EAs as, on average, worse at this than me (not sure of the right term; decoupling?), which is not necessarily true of the other biases on this list.
A specific instantiation of this is that it’s easier for me to generate solutions to problems that are morally unambiguous by the standards of non-EA Western morality, even though we’d expect the tails to come apart fairly often.
To a lesser extent, I have biases towards thinking that doing (lots of) good comes from things that I and my friends are predisposed to be good at (e.g. cleverness, making money).
Another piece of evidence is that EAs seem far from immune to ideological capture on non-EA topics. My go-to example is the SSC/NYT thing.