Really cool idea! Two possibilities:
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate’s health has nothing to do with the fact that you’re an EA; they’d do just as well listening to any other trusted pundit. I’m not sure whether popularity is really your goal, but I think people would be primarily interested in the EA side of this.
2. It might be a good idea to stick to issues where any EA would agree, such as animal welfare and foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree; they disagree for empirical reasons. If you stick to issues that are mostly a question of values, people might trust your judgements more.
This is awesome and I’ve been wanting something like it but am too lazy to create it myself. So I’m really glad kbog did.
I vote for continuing to include weightings for e.g. candidate health. The interesting question is who is actually likely to do the most good, not who believes the best things. So to model that well you need to capture any personal factors that significantly affect their probability of carrying out their agenda.
I think AI safety and biorisk deserve some weighting here even if candidates aren’t addressing them directly. You could use proxy issues that candidates are more likely to have records on and that relevant experts agree are helpful or unhelpful (e.g. actions likely to lead to an arms race with China). You could then adjust for uncertainty by giving them a somewhat lower weight than you would give a direct vote on something like creating an unfriendly AI.
It’s worth a shot, although long-run cooperation and arms races are among the toughest topics to tackle (due to the inherent complexity of international relations). We should start by looking through x-risk reading lists to collect the policy arguments, then see whether there is a robust enough base of ideas to support regular judgements about current policy.
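To make the proxy-weighting suggestion above concrete, here is a minimal sketch of one way it could be wired into a scoring model, assuming a simple multiplicative confidence factor on proxy issues and a separate multiplier for personal factors like candidate health. All of the issue names, weights, and numbers below are hypothetical placeholders, not figures from the actual model.

```python
# Purely hypothetical illustration of discounted proxy-issue weights plus a
# personal-factor multiplier; none of these names or numbers come from the model.

# Each issue has a base weight; proxy issues also carry a confidence factor
# reflecting how reliably they track the outcome we actually care about.
issues = {
    "animal welfare":       {"weight": 0.30, "confidence": 1.0},
    "foreign aid":          {"weight": 0.25, "confidence": 1.0},
    # Proxy for AI/biorisk: record on avoiding an arms race with China.
    "arms race with China": {"weight": 0.20, "confidence": 0.5},
}

def candidate_score(stances, p_carry_out):
    """Weighted stance score, scaled by the probability that the candidate
    actually carries out their agenda (capturing e.g. health or effectiveness)."""
    total = sum(
        info["weight"] * info["confidence"] * stances[name]
        for name, info in issues.items()
    )
    return total * p_carry_out

# Stances scored on a -1 (harmful) to +1 (helpful) scale.
print(candidate_score(
    {"animal welfare": 0.8, "foreign aid": 0.6, "arms race with China": -0.2},
    p_carry_out=0.9,
))
```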
1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate’s health has nothing to do with the fact that you’re an EA; they’d do just as well listening to any other trusted pundit. I’m not sure whether popularity is really your goal, but I think people would be primarily interested in the EA side of this.
I think it would be hard to keep things tight around traditional EA issues, because then we would get attacked for ignoring some people’s pet causes. People would say that EA is ignoring this or that problem and make a stink about it.
There are some things we could easily exclude (like health), but that would just make the model a bit less accurate while still leaving it broad enough to include stances on common controversial topics. The value of this system over other pundits is that it’s all-encompassing in a more formal way, and of course more accurate. Weighting issues on the basis of total welfare is very different from how other people do it.
Still, I see what you mean. I will keep this as a broad report, but when it’s done I can easily cut out a separate version that narrows things down to the main EA topics. Also, I can raise the minimum weight for issue inclusion above 0.01 to keep the model simpler and more focused on the big EA causes (while not really changing the outcomes).
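As a rough illustration of what raising that inclusion threshold would do, here is a sketch that drops issues below a minimum weight and renormalizes the rest. The 0.01 cutoff comes from the comment above, but the issue names and other numbers are made up for the example.

```python
# Hypothetical sketch: drop issues whose weight falls below a minimum,
# renormalize the remaining weights, and score candidates on those alone.
MIN_WEIGHT = 0.01  # raising this prunes minor issues without moving results much

def prune_and_renormalize(weights, min_weight=MIN_WEIGHT):
    kept = {issue: w for issue, w in weights.items() if w >= min_weight}
    total = sum(kept.values())
    return {issue: w / total for issue, w in kept.items()}

def weighted_score(stances, weights):
    return sum(w * stances[issue] for issue, w in weights.items())

raw_weights = {"animal welfare": 0.30, "foreign aid": 0.25,
               "education": 0.04, "minor pet cause": 0.005}
weights = prune_and_renormalize(raw_weights)
print(weighted_score({"animal welfare": 0.7, "foreign aid": 0.5,
                      "education": -0.1}, weights))
```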
2. It might be a good idea to stick to issues where any EA would agree, such as animal welfare and foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree; they disagree for empirical reasons. If you stick to issues that are mostly a question of values, people might trust your judgements more.
Epistemic modesty justifies convergence of opinions.
If there is empirical disagreement that cannot be resolved by diligently looking into the issue, then it’s irrational for people to hold all-things-considered opinions on one side or the other.
If it’s not clear which policy is better than another, we can say “there is not enough evidence to make a judgement” and leave it unscored (one way of handling that in the scoring is sketched below).
So yes, there is a point where I essentially say “you are scientifically wrong, this is the rational position,” but only in cases where there is clear logic and validated expert opinion to back it up to the point of agreement among reasonable people. People already do this with many issues (consider climate change, for instance, where the scientific consensus is frequently treated as an objective fact by liberal institutions and outlets, despite empirical disagreement among many conservatives).
Obviously right now the opinions and arguments are somewhat rough, but they will be more complete in later versions.
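For what the “leave it unscored” rule could look like mechanically, here is one possible sketch in which unsettled issues get no score and are simply dropped from a candidate’s weighted average. The function name, issue names, and numbers are illustrative assumptions, not part of the actual model.

```python
# Hypothetical sketch: issues without enough evidence get a score of None and
# are excluded; remaining weights are renormalized so candidates are not
# penalized merely because of our uncertainty.

def score_candidate(weights, stances):
    scored = {issue: s for issue, s in stances.items() if s is not None}
    if not scored:
        return None  # nothing we can judge yet
    total_weight = sum(weights[issue] for issue in scored)
    return sum(weights[issue] * s for issue, s in scored.items()) / total_weight

weights = {"animal welfare": 0.30, "foreign aid": 0.25, "military intervention": 0.20}
stances = {"animal welfare": 0.8, "foreign aid": 0.4, "military intervention": None}
print(score_candidate(weights, stances))  # averages only the scored issues
```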