I have one concern about this that might reduce estimates of its impact. Perhaps I'm just not understanding it, in which case you can hopefully allay my concerns.
First, that this is a good thing to do assumes you can be fairly certain about which candidate/party is going to make the world a better place, which is pretty hard.
But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads us to the zero-sum game where supporters of candidate A are vote swapping as much as supporters of candidate B. So on the margin, engaging in vote swapping seems obviously good, but at a system level, promoting vote swapping seems less obviously good.
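To make the system-level worry concrete, here's a toy sketch (entirely my own simplification, with made-up numbers) in which each swap is assumed to move one swing-state vote toward the side that organized it:

```python
# Toy model of the concern above (my own simplification; the numbers
# are made up). Each vote swap moves one swing-state vote toward the
# side that organized it.

def swing_state_margin(base_margin, swaps_for_a, swaps_for_b):
    """Candidate A's net swing-state margin after both sides swap."""
    return base_margin + swaps_for_a - swaps_for_b

# On the margin: one extra swap for A always improves A's position.
print(swing_state_margin(0, 1, 0))      # 1
# At the system level: symmetric adoption cancels out entirely.
print(swing_state_margin(0, 500, 500))  # 0
```

Symmetric adoption leaves the margin exactly where it started, even though any single swap still helps its own side.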
Does this make any sense?
Thanks for the feedback!
I generally like arguments from humility, but I think you're overstating the difficulty of choosing the better candidate. E.g., in 2016 only one candidate had any sort of policy at all about farmed animals, so it didn't require a very extensive policy analysis to figure out who was preferable. The same is true for other EA focus areas.
I agree. I do not think that promoting vote pairing irrespective of the candidates is a very useful thing to do.
Beware of unintended consequences, though. The path from “Nice things are written about X on a candidate’s promotional materials” to “Overall, X improved” is a very circuitous one in human politics.
A lot of people in EA seem to assume, without a thorough argument, that direct support for certain political tribes is good for all EA causes. I would like to see some effort put into something like a quasi-realistic simulation of human political processes to back up claims like this. (Not that I'm demanding specific evidence before I'll believe these claims; just that it would be a good idea.) Real-world human politicking seems to be full of crucial considerations.
I also feel like when we talk about human political issues, we lack an understanding of, or don't bother to think about, the causal dynamics behind how politics actually works in humans. I am specifically talking about things like signalling.
To think vote trading is a good idea, you have to believe that, with some reasonable amount of work, you can predict the better candidate at a rate that outperforms chance.
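As a minimal sketch of that threshold (my own framing, with hypothetical numbers: a correct pick gains some value V and a wrong one loses the same V), the expected value of a swap is (2p - 1) * V, which is positive exactly when your accuracy p beats 0.5:

```python
# A minimal expected-value sketch (my own framing and numbers: a
# correct pick gains value_at_stake, a wrong pick loses the same).
# EV = p*V - (1-p)*V = (2p - 1) * V, positive only when p > 0.5.

def expected_value_of_swap(p_correct, value_at_stake):
    """Expected value of one swap given prediction accuracy p_correct."""
    return (2 * p_correct - 1) * value_at_stake

print(expected_value_of_swap(0.50, 100))  # 0.0  (chance level: no gain)
print(expected_value_of_swap(0.75, 100))  # 50.0 (a real edge pays off)
```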
Humility is important, but there's a difference between "politics is hard to predict perfectly" and "politics is impossible to predict at all".
I think there’s a lot of improvement to be had in the area of “refining which direction we are pushing in”.
Was there ever a well-prosecuted debate about whether EA should support Clinton over Trump, or did we just sort of stumble into it because the correct side is so obvious?