I’d be interested in an extended flowchart to prioritize among x-risks and s-risks (a rough sketch of the branching follows the questions below), with questions like:
Do you believe that non-human animals will outnumber humans over the long-term future?
Do you believe it will be possible to create artificial sentient beings (e.g. software-based) with moral significance?
If yes: Do you believe that artificial sentient beings will outnumber biological humans/non-human animals over the long-term future?
Do you believe that artificial general intelligence will be created in the next 50 years?
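To make the idea concrete, here is a minimal sketch (in Python, with all leaf labels, branch orderings, and follow-up wording made up purely for illustration, not as a worked-out prioritization) of how such a flowchart could be encoded as a nested yes/no decision tree:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Question:
    """One node of the flowchart: a yes/no question with two branches."""
    text: str
    if_yes: Union["Question", str]  # follow-up question, or a suggested focus (leaf)
    if_no: Union["Question", str]

# Hypothetical branch structure; the leaf labels are placeholders only.
flowchart = Question(
    text="Will it be possible to create artificial sentient beings with moral significance?",
    if_yes=Question(
        text="Will artificial sentient beings outnumber biological humans/"
             "non-human animals over the long-term future?",
        if_yes="Consider prioritizing s-risks affecting artificial minds",
        if_no="Consider prioritizing x-risks to biological life",
    ),
    if_no=Question(
        text="Will non-human animals outnumber humans over the long-term future?",
        if_yes="Consider prioritizing animal-inclusive longtermist work",
        if_no="Consider prioritizing human-focused x-risk reduction",
    ),
)

def walk(node: Union[Question, str], answers: dict) -> str:
    """Follow the flowchart using a dict mapping question text to True/False."""
    while isinstance(node, Question):
        node = node.if_yes if answers.get(node.text, False) else node.if_no
    return node
```

The point of the sketch is just that each belief question narrows the space of priorities, so an extended flowchart amounts to adding more nodes and branches of this kind.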