Most people agree that conventional wisdom is pretty good when it comes to designing institutions, at least compared to what a first-principles reasoner could come up with.
I think your characterization of conventional answers to practical sociological questions is much too charitable, and your conclusion (“there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA “leaders”, the EA Forum, or LessWrong”) is correspondingly much too strong.
Indeed, EA grew out of a recognition that many conventional answers to practical sociological questions are bad. Many of us were socialized to think our major life goals should include getting a lucrative job, buying a nice house in the suburbs, and leaving as much money as possible to our kids. Peter Singer very reasonably contested this by pointing out that the conventional wisdom is probably wrong here: the world is deeply unjust, most people and animals live very hard lives, and many of us can improve their lives while making minimal sacrifices ourselves. (And this is to say nothing about the really bad answers societies have developed for certain practical sociological questions: e.g., slavery; patrilineal inheritance; mass incarceration; factory farming.)
More generally, our social institutions are not designed to make sentient creatures’ lives go well; they are designed to, for instance, maximize profits for corporations. Amazon is not usually cited as an example of an actor using evidence and reason to do as much good as possible, and company solutions are not developed with this aim in mind. (Think of another practical solution Amazon workers came up with: peeing in bottles because they risked missing their performance targets if they used the bathroom.)
I agree with many of the critiques you make of EA here, and I agree that EA could be improved by adopting conventional wisdom on some of the issues you cite. But I would suggest that your examples are cherrypicked, and that “the rot” you refer to is at least as prevalent in the broader world as it is within EA. Just because EA errs on certain practical sociological questions (e.g., peer review; undervaluing experience) does not mean that conventional answers are systematically better.
Yeah, I strongly agree and endorse Michael’s post, but this line you’re drawing out is also where I struggle. Michael has made better progress on teasing out the boundaries of this line than I have, but I’m still unclear. Clearly there are cases where conventional wisdom is wrong—EA is predicated on these cases existing.
Michael is saying that on questions of philosophy, we should not accept conventional wisdom, but on questions of sociology, we should. I agree with you that the distinction between the sociological and the philosophical is not quite clear. I think your example of "what should you do with your life" is a good example of where the boundaries blur.
Maybe; I think "sociological" is not quite the right framing, and something along the lines of "good governance" fits better. The peer review point Michael brings up doesn't fit into that dynamic. Even though I agree with him, I think "how much should I trust peer review?" is an epistemic question, and epistemics falls into the category where Michael thinks EAs might have an edge over conventional wisdom. That said, even if I thought there were reason to distrust conventional wisdom on this point, I would still trust professional epistemologists over the average EA here, and I would find it hard to believe that professional epistemologists think forums/blogs are more reliable than peer-reviewed journals.
What “major life goals should include (emphasis added)” is not a sociological question. It is not a topic that a sociology department would study. See my comment that I agree “conventional wisdom is wrong” in dismissing the philosophy of effective altruism (including the work of Peter Singer). And my remark immediately thereafter: “Yes, these are philosophical positions, not sociological ones, so it is not so outrageous to have a group of philosophers and philosophically-minded college students outperform conventional wisdom by doing first-principles reasoning”.
I am not citing Amazon as an example of an actor using evidence and reason to do as much good as possible. I am citing it as an example of an organization that is effective at what it aims to do.
Maybe I’m just missing something, but I don’t get why EAs have enough standing in philosophy to dispute the experts, but not in sociology. I’m not sure I could reliably predict which other fields you think conventional wisdom is or isn’t adequate in.
In fields where it’s possible to make progress with first-principles arguments/armchair reasoning, I think smart non-experts stand a chance of outperforming. I don’t want to make strong claims about the likelihood of success here; I just want to say that it’s a live possibility. I am much more comfortable saying that outperforming conventional wisdom is extremely unlikely on topics where first-principles arguments/armchair reasoning are insufficient.
(As it happens, EAs aren’t really disputing the experts in philosophy, but that’s beside the point...)
So basically, just philosophy, math, and some very simple applied math (like, say, the exponential growth of an epidemic), but already that last example is quite shaky.
I think the crux of the disagreement is this: you can’t disentangle the practical sociological questions from the normative questions this easily. E.g., the practical solution to “how do we feed everyone” is “torture lots of animals” because our society cares too much about having cheap, tasty food and too little about animals’ suffering. The practical solution to “what do we do about crime” is “throw people in prison for absolutely trivial stuff” because our society cares too much about retribution and too little about the suffering of disadvantaged populations. And so on. Practical sociological solutions are always accompanied by normative baggage, and much of this normative baggage is bad.
EA wouldn’t be effective if it just made normative critiques (“the world is extremely unjust”) but didn’t generate its own practical solutions (“donate to GiveWell”). EA has more impact than most philosophy departments because it criticizes many conventional philosophical positions while also generating its own practical sociological solutions. This doesn’t mean all of those solutions are right—I agree that many aren’t—but EA wouldn’t be EA if it didn’t challenge conventional sociological wisdom.
(Separately, I’d contest that this is not a topic of interest to sociologists. Most sociology PhD curricula devote substantial time to social theory, and a large portion of sociologists are critical theorists; i.e., they believe that “social problems stem more from social structures and cultural assumptions than from individuals… [social theory] argues that ideology is the principal obstacle to human liberation.”)