Passage 5 seems to prove too much: "if you take X philosophy literally, it becomes bad for you" applies to most philosophies, not just EA. But I very much like Passage 4, the EA judo one.
While it is very much true that disagreeing over the object-level causes shouldn’t disqualify one from EA, I do agree that it is not completely separate from EA—that EA is not defined purely by its choice of causes, but neither does it stand fully apart from them. EA is, in a sense, both a question and an ideology, and trying to make sure the ideology part doesn’t jump too far ahead of the question part is important.
“Again: if your social movement “works in principle” but practical implementation has too many problems, then it’s not really working in principle, either. The quality “we are able to do this effectively in practice” is an important (implicit) in-principle quality.”
This is a key point that many movements, including EA, should keep in mind. I think what EA should be aiming for is: "EA has some very good answers to the question of how we can do the most good, and we think they're the best answers humanity has yet come up with. That's different from thinking our answers are objectively true, or that we have all the best answers and there are none left to find." We can have the humility to question ourselves while still having the confidence to suggest our answers are good ones.
I dream of a world where EA is to doing good as science is to human knowledge. Science isn’t always right, and science has been proven wrong again and again in the past, but science is collectively humanity’s best guess. I would like for EA to be humanity’s best guess at how to do the most good. EA is very young compared to science, so I’m not surprised we don’t have that same level of mastery over our field as science does, but I think that’s the target.
Thank you Jay, this is such a great response, I especially liked this paragraph:
While it is very much true that disagreeing over the object-level causes shouldn’t disqualify one from EA, I do agree that it is not completely separate from EA—that EA is not defined purely by its choice of causes, but neither does it stand fully apart from them. EA is, in a sense, both a question and an ideology, and trying to make sure the ideology part doesn’t jump too far ahead of the question part is important.
Also, I think EA is essentially about applying the scientific revolution to the realm of doing good (as a branch of science of sorts).