I think the challenge with a project like this is that it is not ‘neutral’ in the way most EA causes are.
Most EA causes I can think of are focused on some version of saving lives or reducing suffering. Although there may be disagreement about how to best save lives or reduce suffering (and what things suffer), there is almost no disagreement that we should save lives and reduce suffering. Although this is not a philosophically neutral position, it’s ‘neutral’ in that you will find a vanishingly small number of people who disagree with the goal of saving lives and reducing suffering.
To put it another way, it’s ‘neutral’ because everyone values saving lives and reducing suffering so everyone feels like EA promotes their values.
Specific books, unless they are complete milquetoast, are not neutral in this way and implicitly promote particular ideas. Much of introductory EA literature, if nothing else, assumes positive act utilitarianism (although within the community there are many notable voices opposed to this position). And if we move away from EA books to other books we think are valuable, they are going to drift even further from ‘neutral’ values everyone can get behind.
This is not necessarily bad, but to me the project doesn’t seem to fit well with much of the EA brand, because whatever impact it has will have to be measured in terms of values not everyone agrees with.
For example, lots of people in the comments list HPMOR, The Sequences, or GEB. I like all of these a lot and would like to see more people read them, but that’s because I value the ideas and behaviors they encourage. You don’t have to look very far in EA, though, to find people who don’t agree with the rationalist project and wouldn’t like to see money spent on sending people copies of these books.
In a position like that, how do you rate the effectiveness of such a project? The impact would have to be measured in terms of transmitting values that not everyone agrees are worth spreading. Unless you limit yourself to books that just promote the idea that we can save lives, reduce suffering, and be a little smarter about how we go about that, I think the evaluation will necessarily attract a lot of controversy.
I’m not saying I’m against people taking on projects like this. I just want to make sure we’re aware it’s not a normal EA project: the immediate outcome seems to be idea transmission, and it’s going to be hard to evaluate which ideas are even worth spreading.