Ughh … baking judgements about what’s morally valuable into the question doesn’t seem ideal. I think it’s an OK way to go for moral ~realists, but among anti-realists you might get people persistently disagreeing about what counts as extinction.
Also: what if you have a world that matches what you describe as an extinction scenario, but there’s a small amount of moral value in some subcomponent of the AI system? Does that mean it no longer counts as an extinction scenario?
I’d propose instead using the typology Will proposed here, and framing the debate as (1) + (4) on the one hand vs (2) + (3) on the other.