It seems somewhat at odds with the truth-seeking part. I would say "it is bad for our epistemic norms" … though I'm not sure I'm using that terminology correctly.
Aside from that, I think some of the empirical claims you mention have a bit less consensus in EA than you suggest, such as:
We live in an “unusual” time in history
My impression is that even among longtermists the 'hinge of history' claim is heavily contested.
Most humans in the world have net positive lives
Maybe they do now, but I don't think we can have great confidence that this will hold in the future. Also, the 'most' does a lot of work here: it seems plausible to me that at least 1 billion people in this world have net-negative lives.
Sentience is not limited to humans/biological beings
Most EAs (and most humans?) surely believe at least some animals are sentient. But for non-biological beings, I'm not sure how widespread this belief is. At the least, I don't think there is any consensus that we 'know of non-biological beings that are currently sentient', nor that 'there is a way to know which direction the valence of non-biological beings goes'.
E.g., 'digital minds could be sentient' is an important consideration, relevant to a lot of longtermist EA prioritisation.
I'm not sure that's been fully taken on board. In what ways is it relevant? Are we prioritizing 'create the maximum number of super-happy algorithms'? (Maybe I'm missing something, though; this is a genuine question.)
So overall, I sort of disagree with us 'Agreeing on a set of Facts'.