My main thought on the matter is that decompartmentalising has its disadvantages as well, as has been discussed in depth at (of course!) LessWrong.
First, here are some posts taking your old-school view that decompartmentalising is valuable: Compartmentalisation in epistemic and instrumental rationality, and Taking Ideas Seriously. They make arguments similar to yours, but in both cases the authors have since revised their opinions. For the first piece, Anna wrote a significant comment moderating her conclusions. For the latter piece, Will retracted it.
Phil Goetz wrote one of LessWrong’s most upvoted pieces, arguing that decompartmentalisation is often irrational, Reason as a Memetic Immune Disorder.
So why have people started off so in favour of decompartmentalisation and then turned their positions around? If, like Dale, you're really interested in this topic, it's probably worthwhile to work through some of these original sources. But one general issue is that decompartmentalising means treading an uncommon path, which means you have fewer footsteps to follow in, which means you'll often end up with weird and incorrect beliefs. It's also very likely that decompartmentalising (letting some beliefs affect all areas of decision-making) will lead to fanaticism (overvaluing certain beliefs).
It's kind of a timeless question in rationality. But the growing consensus I'm discovering is that decompartmentalisation should be matched with what Anna calls 'safety features': knowing when you have strange beliefs, when to backtrack, when to think more, when to get friends to review your opinions, when to cede to the majority viewpoint, et cetera. Basically, I think we need to build safety measures into our thinking, and share them around the effective altruist community.
Other posts about problems with consequentialism and rationality include Virtue Ethics for Consequentialists and the less directly relevant but also interesting Ontological Crises.
Yeah, I basically agree with this. The problem is that much of the EA movement relies on decompartmentalizing—Singer’s classic Drowning Child argument basically relies on us decompartmentalizing our views on local charity from those on third world charity.
Indeed, many people would argue that EAs are fanatical: imagine taking some weird philosophy so seriously that you end up giving away millions of dollars! Or worse, giving up your fulfilling career to become a miserable corporate lawyer!