I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.
Thank you. If you fully clarify that this is a project of someone who identifies as an effective altruist, and that your position may or may not be shared by all ‘effective altruists’, then my objections are pretty much moot. I really want to reiterate how much I think EA would gain by staying away from most political issues.
Almost by definition, issues that are distant from EA’s core concerns will tend to get less weight. So including them is not very important, and at the same time they will not change many of the outcomes.
What is the benefit of including them? Does the benefit outweigh the cost of potentially shuffling some candidates unnecessarily? At the moment, I would suggest ranking those politicians who would get shuffled equally, and letting the reader decide for themselves (or flip a coin).
Then they can be marketed differently and people can choose which one to look at.
This seems like a very bad idea. It is similar to newspapers purporting to be the source of “factual information” while selling different versions of articles tailored to readers’ points of view. There is one objective reality, and our goal should be to get our understanding as close to it as possible. Again, I would instead suggest setting the goal of making a model which is
1.) Robust to new evidence
2.) Robust to different points of view
This will require some tradeoffs (or may not be possible at all), but only then can you get rid of the cognitive dissonance in the second paragraph of your report and confidently say: “If you use the model correctly, and one politician scores better than another, then he/she is better, full stop.”
Thank you for the post. I agree with you insofar as AI as an x-risk is concerned, especially in the near term, where we are much more likely to be eradicated by more ‘banal’ means. However, just for emphasis: this does not mean that there are no risks related to AI safety. AGI is very likely far away, but even current machine learning algorithms may become quite a powerful weapon when misused.