If AGI goes well for humans, it’ll probably [i.e. ≥70%] go well for animals
I think it doesn’t really make sense to do a sliding-scale vote on ≥70%. If my credence on (AGI goes well for animals | AGI goes well for humans) is 60%, then I’m just a no; if my credence is 80%, then I’m just a yes.
One way people could interpret the sliding scale is as expressing their confidence or stability in their judgment about whether the probability is greater or less than 70%. But that's somewhat deranged; it's not clear how to make it precise, and I think everyone will just be confused.
It would be totally reasonable to vote on a scale from 0% to 100% for P(AGI goes well for animals | AGI goes well for humans), rather than voting from fully-disagree to fully-agree for ≥70%. Obviously that requires a little recoding. But making the voting scale more flexible, rather than just from fully-disagree to fully-agree, will benefit other debates too.