Footnote 2 completely changes the meaning of the statement from its common-sense interpretation. It makes it so that, e.g., a future scenario in which AI takes over and causes existential catastrophe and the extinction of biological humans this century does not count as extinction, so long as the AI continues to exist. As such, I chose to ignore it with my “fairly strongly agree” answer.
Thanks—yep, I think this is becoming a bit of an issue (it came up a couple of times in the symposium as well). I might edit the footnote to clarify: worlds with morally valuable digital minds should be included as a non-extinction scenario, but worlds where an AI that could be called “intelligent life” but isn’t conscious/morally valuable takes over and humans become extinct should count as an extinction scenario.
Ughh … baking judgements about what’s morally valuable into the question somehow doesn’t seem ideal. Like I think it’s an OK way to go for moral ~realists, but among anti-realists you might have people persistently disagreeing about what counts as extinction.
Also: what if you have a world like the one you describe as an extinction scenario, but there’s a small amount of moral value in some subcomponent of that AI system? Does that mean it no longer counts as an extinction scenario?
I’d kind of propose instead using the typology Will proposed here, and making the debate between (1) + (4) on the one hand vs (2) + (3) on the other.