Hi David, just a very quick reply: I agree that if the first two premises were true but the third were false, then EA would still be important in a sense; it’s just that everyone would already be doing EA, so we wouldn’t need a new movement to do it, and people wouldn’t increase their impact by learning about EA. I’m unsure how best to handle this in the argument.
Just to be clear, this is only a small part of my concern about it sounding like EA relies on assuming (and/or that EAs actually do assume) that the things which are high impact are not the things people typically already do.
One way this premise could be false, other than everyone being an EA already, is if it turns out that the kinds of things people who want to contribute to the common good typically do are actually the highest impact ways of contributing to the common good. That is, we investigate, as effective altruists, and it turns out that the kinds of things people typically do to contribute to the common good are (the) high(est) impact. [^1]
To the non-EA reader, it likely wouldn’t seem too unlikely that the kinds of things they typically do are actually high impact. So it may seem peculiar and unappealing for EAs to just assume [^2] that the kinds of things people typically do are not high impact.
[^1] A priori, one might think there are some reasons to presume in favour of this (and so against the EA premise): e.g. James Scott-type reasons, deference to common opinion, etc.
[^2] As noted, I don’t think you actually do think that EAs should assume this, but labelling it as a “premise” in the “rigorous argument for EA” certainly risks giving that impression.