This is awesome and I’ve been wanting something like it but am too lazy to create it myself. So I’m really glad kbog did.
I vote for continuing to include weightings for personal factors like candidate health. The interesting question is who is actually likely to do the most good, not who holds the best positions. So to model that well you need to capture any personal factors that significantly affect a candidate's probability of carrying out their agenda (rough sketch below).
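To make the idea concrete, here's a toy sketch of the kind of adjustment I mean. All the numbers (and the idea that a single probability captures "personal factors") are mine for illustration, not anything from the actual model:

```python
# Toy sketch: a candidate's expected impact is their policy score
# discounted by the probability that personal factors (health, age,
# scandal risk, etc.) let them actually enact their agenda.
# All numbers below are hypothetical.

def expected_impact(policy_score: float, p_carries_out_agenda: float) -> float:
    """Discount a candidate's policy score by the chance they can act on it."""
    return policy_score * p_carries_out_agenda

# A candidate with better positions but worse health can come out
# behind one with slightly weaker positions:
print(expected_impact(policy_score=0.9, p_carries_out_agenda=0.7))   # ~0.63
print(expected_impact(policy_score=0.8, p_carries_out_agenda=0.95))  # ~0.76
```

The point of the multiplication is just that a great agenda with a low chance of being carried out can be worth less than a decent agenda with a high chance.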
I think AI safety and biorisk deserve some weighting here even if candidates aren't addressing them directly. You could use proxy issues that candidates are more likely to have records on and that relevant experts agree are helpful or unhelpful (e.g. actions likely to lead to an arms race with China), then adjust for uncertainty by giving those proxies a somewhat lower weight than you would give a direct vote on something like creating an unfriendly AI (sketch below).
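Something like this toy sketch, where every issue name, weight, and confidence number is made up by me purely for illustration:

```python
# Sketch of uncertainty-discounted proxy weights (all values hypothetical).
# A direct record gets full confidence; a proxy (e.g. arms-race-adjacent
# actions standing in for AI safety) gets its weight shrunk by how
# confident we are that the proxy tracks the thing we actually care about.

ISSUES = {
    # issue: (base_weight, proxy_confidence); confidence = 1.0 for direct records
    "climate_policy": (0.3, 1.0),
    "ai_safety_via_china_arms_race_proxy": (0.4, 0.5),
    "biorisk_via_pandemic_preparedness_proxy": (0.3, 0.7),
}

def candidate_score(positions: dict) -> float:
    """Weighted sum of position scores in [-1, 1], proxies discounted for uncertainty."""
    return sum(
        weight * confidence * positions.get(issue, 0.0)
        for issue, (weight, confidence) in ISSUES.items()
    )

# Hypothetical candidate: good direct climate record, mixed proxy signals.
print(candidate_score({
    "climate_policy": 0.8,
    "ai_safety_via_china_arms_race_proxy": -0.2,
    "biorisk_via_pandemic_preparedness_proxy": 0.5,
}))
```

The confidence multiplier is doing the work here: a noisy proxy can still move the score, but never as much as a direct record would.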
It's worth a shot, although long-run cooperation and arms races seem like one of the toughest topics to tackle, given the inherent complexity of international relations. We should start by looking through x-risk reading lists to collect the policy arguments, then see whether there is a robust enough base of ideas to support frequent judgements about current policy.