My very tentative view is that we’re sufficiently clueless about the probability distribution of possible outcomes from “Risks posed by artificial intelligence” and other x-risks that the ratio between [the value one places on creating a happy person] and [the value one places on helping a person who would be created without intervention] should have little influence on how much one prioritizes avoiding existential catastrophes.
I would guess that extinction would have more permanent and farther-reaching effects than the other possible existential catastrophes, especially if the population were otherwise expected to grow, so on a symmetric view, extinction could look much worse than the rest of the distribution (comparing outcomes conditional on extinction with outcomes conditional on an existential catastrophe that does not cause extinction).