I think you get a lot right, but some of these claims, especially the empirical ones, seem to apply only to certain (perhaps longtermist) segments.
I’d agree with, and focus on:
Altruism, willingness to substantially give (money, time) from one’s own resources, and the goodness of this (but not necessarily an ‘obligation’)
Utilitarianism/consequentialism
(Corollary): The importance of maximization and prioritization in making choices about doing good.
A wide moral circle
Truth-seeking and reasoning transparency
I think these four things are fairly universal and core among EAs, longtermist and non, and they bring us together. I also suspect that what we learn about how to promote these things will transfer across the various cause areas and branches of EA.
I somewhat disagree with ‘Agreeing on a set of Facts’. That framing seems to conflict with the truth-seeking part. I would say “it is bad for our epistemic norms” … but I’m not sure I’m using that terminology correctly.
Aside from that, I think some of the empirical claims you mention probably have a bit less consensus in EA than you suggest, such as:
We live in an “unusual” time in history
My impression is that even among longtermists the ‘hinge of history’ claim is strongly contested.
Most humans in the world have net positive lives
Maybe they do now, but I don’t think we can have great confidence about the future. Also, ‘most’ does a lot of work here: it seems plausible to me that at least 1 billion people in this world have net negative lives.
Sentience is not limited to humans/biological beings
Most EAs (and most humans?) surely believe that at least some animals are sentient. But for non-biological beings, I’m not sure how widespread this belief is. At least, I don’t think there is any consensus that we ‘know of non-biological beings that are currently sentient’, nor that ‘there is a way to know which direction the valence of non-biological minds goes’.
E.g., that digital minds could be sentient is an important consideration and relevant in a lot of longtermist EA prioritisation.
I’m not sure that’s been fully taken on board. In what ways? Are we prioritizing ‘create the maximum number of super-happy algorithms’? (Maybe I’m missing something, though; this is a genuine question.)