Thank you for the post. I agree with you insofar as AI as an x-risk is concerned, especially in the near term, where we are much more likely to be eradicated by more ‘banal’ means. However, just for emphasis: this does not mean that there are no risks related to AI safety. AGI is very likely far away, but even today’s machine learning algorithms may become quite a powerful weapon when misused.
I will make it reasonably clear in the proper reports that this is An EA Project rather than The EA Position.
Thank you. If you fully clarify that this is a project of someone who identifies as an effective altruist, and that your position may or may not be shared by all ‘effective altruists’, then my objections are pretty much moot. I really want to reiterate how much I think EA would gain by staying away from most political issues.
Almost by definition, the issues that are distant from EA will tend to get less weight. So it’s not super important to include them, but at the same time they will not change many of the outcomes.
What is the benefit of including them? Does the benefit outweigh the cost of potentially unnecessarily shuffling some candidates? At the moment, I would suggest ranking those politicians who would get shuffled equally, and letting the reader decide for themselves (or flip a coin), as sketched below.
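To make that concrete, here is a minimal sketch of what ‘ranking shuffled candidates equally’ could look like. Everything in it is hypothetical: the `Candidate` class, the `tolerance` threshold, and the scores are my illustration, not the report’s.

```python
# Hypothetical sketch: rank candidates, but give equal rank to candidates
# whose model scores are too close to distinguish confidently.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # model output, higher is better

def rank_with_ties(candidates, tolerance=0.05):
    """Sort by score, but reuse the previous rank whenever the gap to the
    candidate above is smaller than `tolerance`, leaving the choice within
    a tied group to the reader (or a coin flip)."""
    ordered = sorted(candidates, key=lambda c: c.score, reverse=True)
    ranks, rank, prev = {}, 1, None
    for i, c in enumerate(ordered):
        if prev is not None and prev - c.score >= tolerance:
            rank = i + 1  # gap is meaningful: start a new rank group
        ranks[c.name] = rank
        prev = c.score
    return ranks

print(rank_with_ties([Candidate("A", 0.81), Candidate("B", 0.79),
                      Candidate("C", 0.60)]))
# -> {'A': 1, 'B': 1, 'C': 3}
```

One caveat of this threshold approach is that ties chain: a long run of candidates, each within `tolerance` of the next, collapses into one group, which may or may not be what you want.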
Then they can be marketed differently and people can choose which one to look at.
This seems like a very bad idea. It is similar to newspapers purporting to be the source of “factual information” while selling different versions of articles based on readers’ points of view. There is one objective reality, and our goal should be to get our understanding as close to it as possible. Again, I would instead suggest setting the goal of making a model which is
1.) Robust to new evidence
2.) Robust to different points of view
This will require some tradeoffs (or may not be possible at all), but only then can you get rid of the cognitive dissonance in the second paragraph of your report and confidently say, “If you use the model correctly, and one politician scores better than another, then he/she is better, full stop.” A rough sketch of one such robustness check follows below.
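For the second kind of robustness, here is a rough sketch, assuming the model reduces to a weighted sum of per-issue scores. The issue names, scores, and the sampling approach are my own illustration, not the report’s method: sample many plausible weightings (‘points of view’) and measure how often the ordering of two candidates flips.

```python
# Hypothetical robustness check: if two candidates keep their ordering under
# many random issue weightings, the ranking is robust to points of view.
import random

ISSUES = ["global_health", "animal_welfare", "x_risk"]  # made-up issue set

def score(candidate, weights):
    return sum(candidate[i] * weights[i] for i in ISSUES)

def flip_rate(a, b, trials=10_000):
    """Fraction of random normalized weight vectors under which b beats a."""
    flips = 0
    for _ in range(trials):
        raw = {i: random.random() for i in ISSUES}
        total = sum(raw.values())
        w = {i: v / total for i, v in raw.items()}  # weights sum to 1
        if score(b, w) > score(a, w):
            flips += 1
    return flips / trials

a = {"global_health": 0.9, "animal_welfare": 0.4, "x_risk": 0.7}
b = {"global_health": 0.5, "animal_welfare": 0.8, "x_risk": 0.6}
print(flip_rate(a, b))  # near 0 or 1: robust ordering; near 0.5: rank equally
```

A flip rate near 0.5 would be exactly the ‘shuffled’ case above, where ranking the two candidates equally seems more honest than declaring a winner.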
Apologies for not being clear enough: I am suggesting the first, and part of the second, i.e. removing issues not related to EA. It is fine to discuss the best available evidence on “not well studied” topics, but I don’t think it’s advisable to give an “official EA position” on those.
In addition, my first point questions the idea of ranking politicians based on the views they claim or seem to hold, because their actual actions are unpredictable regardless of what they say. I believe EA should stick to spreading the message that each individual can make the world a better place through altruism based on reason and evidence, and that we should trust no politician or anybody else to do it for us.
I would really like to support the idea of keeping EA’s focus on issues and solutions. Predicting the effects of an (altruistic) action is difficult in this complex world, but still easier than predicting the actions of another person, and even more so of a politician in the current system of irrational agents playing political games with incomplete, imperfect information (and a lack of accountability). We may rank candidates at least roughly according to what they say they intend to do, but this estimate carries so much error as to be of little value. Supporting the intentions themselves of course makes sense in cases with hard, relatively long-term empirical evidence of enhancing general well-being, such as free trade.
Moreover, we may want to at least consider the effect of supporting politicians who express controversial opinions on issues unrelated to the values and causes of EA, especially on the basis of highly subjective and fallible ranking systems. Personally, I came to EA in part because I love how (mostly) apolitical this community is, and maybe I am not alone.
That’s not what I’m saying at all. How is a suggestion to include a disclaimer an objection about methodology? It is not that unclear. Am I being read?
What is the methodology for determining the weights?
Because leaving a decision to chance in the face of uncertainty may sometimes be a good strategy? And I suggest leaving things to the reader’s judgement when there is still considerable uncertainty or insufficient evidence for taking any position. Am I being considered at all, or have you just decided to take a hostile position for some reason...?
I agree.
Again, nowhere have I expressed that wish.
I agree.
That is a vague statement which I didn’t make.
Again, I never said that. It probably refers to my first post, where I was talking about the general EA position, which is moot once you include a disclaimer.
However, I apologize for not taking the time to do at least some research before commenting. I am not versed in political science at all, and your model (or a future version of it) may well be justifiable. I have some experience in game theory, which may have biased me toward seeing the problem as more complicated than it is at first glance; even more importantly, I also have a truckload of other biases I should try to become more aware of. For example, I thought that if you take a random politician’s pre-election promise on a random topic, it is likely to be left unaddressed or broken if they are elected, due to a lack of accountability and the drive to appeal to voters (I know of some past examples where that happened, which of course doesn’t mean it’s likely in general). A quick search suggested this was probably wrong, so again, I apologize.
I will do some research and thinking when I have time and come back when I have some (hopefully) more informed ideas, and I will definitely do so in the future. However, I don’t retract the objections that don’t rely on the unpredictability of politicians’ decisions.