Following Eliezer, I think of an AGI as “safe” if deploying it carries no more than a 50% chance of killing more than a billion people.
Is this 50% from the point of view of some hypothetical person who knows as much as is practical about this AGI’s consequences, from your own point of view, or something else?
Do you imagine that deploying two such AGIs in parallel universes with some minor random differences would have only a 25% chance of both of them killing more than a billion people?
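To spell out the arithmetic behind the 25% figure (this is my reading of the question, not something stated above): if the 50% is aleatoric randomness that resolves independently in each universe, the joint probability multiplies,

\[ P(\text{both kill} > 10^9) = 0.5 \times 0.5 = 0.25, \]

whereas if it is epistemic uncertainty about some underlying fact shared by both universes, the two outcomes are nearly perfectly correlated and the joint probability stays close to 0.5.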