Thanks Michelle.
I agree there’s a difficulty in finding a theoretical justification for how inclusive to be. But I think this overcooks the problem somewhat, as an easier practical principle would be: “be so inclusive that no one feels their initially preferred theory isn’t represented”. You could swap “no one” for “few people”, with “few” to be further defined. There doesn’t seem much point saying “this is what a white supremacist would think”, as there aren’t many of those floating around EA, for whatever reason.
On your suggestions for being inclusive, I’m not sure the first two are so necessary, simply because it’s not clear what types of EA actions prioritarians and deontologists will disagree about in practice. For which charities will utilitarians and prioritarians diverge, for instance?
On the third, I think we already do that, don’t we? We already have lots of human-focused causes people can pick if they aren’t concerned about non-human animals.
On the last, the only view I can think of which puts no value on the future would be one with a very high pure time discount. I’m inclined towards person-affecting views, and I still think climate change (and X-risk) would be bad and are worth worrying about: they could impact the lives of those alive today. As I said to B. Todd earlier, I just don’t think they swamp the analysis.