how is one to tradeoff between existential safety (for humans) and suffering risks (for all minds) [...] what are the odds that there’s no way for humanity-preservers and suffering-reducers to get along?
It seems that what you have in mind is tradeoffs between extinction risk reduction and suffering risk reduction. I say this because existential risk itself includes a substantial portion of possible suffering risks, and isn’t just about preserving humanity. (See Venn diagrams of existential, global, and suffering catastrophes.)
I also think it would be best to separate out the question of which types of beings to focus on (e.g., humans, nonhuman animals, artificial sentient beings…) from the question of how much to focus on reducing suffering in those beings vs achieving other possible moral goals (e.g., increasing happiness, increasing freedom, creating art).
(There are also many other distinctions one could make, such as between affecting the lives of beings that already exist vs changing whether beings come to exist in the future.)