I don’t think that the chance of the election hinging on a single vote is the right thing to look at. One should decide based on the fact that other people similar to them are likely to act similarly. E.g. a person reading this post might decide whether to vote by asking themselves whether they want 300 people on the EA forum to each spend an hour (+ face COVID-19 risk?) on voting. (Of course, this reasoning neglects a much larger group of people that are also correlated with them.)
The costs are (in expectation) proportional to the benefits, so I think even under EDT or FDT it mostly just adds up to normality. For altruists at least.
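A rough way to spell out the proportionality claim (the notation here is mine, not anything from the thread): write $p$ for the probability that a single vote is pivotal, $V$ for the value of the preferred outcome, $c$ for one person’s cost of voting, and $N$ for the number of sufficiently similar voters. Then, roughly,

$$\mathrm{EV}_{\text{CDT}} = pV - c, \qquad \mathrm{EV}_{\text{EDT}} \approx N\,(pV - c).$$

As long as each of the $N$ correlated votes has about the same pivot probability $p$ (plausible when $N$ is small relative to the expected margin of the election), the two quantities have the same sign, so the EDT/FDT verdict matches the ordinary single-voter calculation.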
When one assumes that the number of people who are similar to them is (roughly speaking) sufficiently small, I agree.
The costs are higher for people who value the time of those who are correlated with them, while the benefits are not correspondingly higher.
It seems like to figure out whether it’s a good use of time for 300 people like you to vote, you still need to figure out if it’s worth it for any single of them.
What I mean to say is that, roughly speaking, one should compare the world where people like them vote to the world where people like them don’t vote, and choose the better world. That can yield a different decision than deciding as if one were choosing only for oneself.
To decide whether they want 300 people on the EA forum to each spend an hour (+ face COVID-19 risk?) on voting, shouldn’t they look at the chances? How would this change the answer? Perhaps the risk to the EA community increases nonlinearly (since the EA community’s marginal returns on additional members aren’t constant), while the benefit of additional votes increases roughly linearly?
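To make that worry concrete (again, my own notation and a simplification, not something from the thread): suppose the collective benefit of $N$ correlated votes is roughly linear, $B(N) \approx N\,pV$, while the collective cost $C(N)$ is convex in $N$ with $C(0) = 0$, because the community’s marginal returns on members aren’t constant. Convexity then gives $C(N) \ge N\,C(1)$, so

$$B(N) - C(N) \;\le\; N\,\bigl(pV - C(1)\bigr),$$

with equality only in the linear case. In other words, the 300-person version of the decision can look worse than 300 copies of the one-person version, and can even come out negative while the one-person calculation looks positive.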
Also, there are mail-in ballots, although it might be too late in some places (I’m not informed either way, so don’t take my word for it).
This sounds like the evidential decision theory answer, and I’m not that familiar with these different decision theories. However, your decision to vote doesn’t cause these others to vote; it’s only evidence that they are likely to act similarly, right? Finding that out one way or another doesn’t actually make the world better or worse (compared to alternatives); it just clears up some uncertainty you had about what the world would look like. Otherwise, couldn’t you justify confirmation bias, e.g. telling your friends to share only good news with you?
What I wrote is indeed aligned with evidential decision theory (EDT). The objections to EDT that you mentioned don’t seem to apply here. When you decide whether to vote you don’t decide just for yourself, but rather you decide (roughly speaking) for everyone who is similar to you. The world will become better or worse depending on whether it’s good or bad that everyone-who-is-similar-to-you decides to vote/not-vote.
What does this mean? If I’m in the voting booth, and I suddenly decide to leave the ballot blank, how does that affect anyone else?
It doesn’t affect anyone else in a causal sense, but it does affect people similar to you in a decision-relevant-to-you sense.
Imagine that while you’re in the voting booth, in another identical voting booth there is another person who is an atom-by-atom copy of you (and assume our world is deterministic). In this extreme case, it is clear that you’re not deciding just for yourself. When we’re talking about people who are similar to you rather than copies of you, a probabilistic version of this idea applies.
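A rough way to write down that probabilistic version (my notation, and a big simplification): suppose that, conditional on you voting, each of $N$ similar people votes with probability $q_1$, and conditional on you abstaining, with probability $q_0 < q_1$. Then the evidential expected value of voting is roughly

$$\bigl(1 + N\,(q_1 - q_0)\bigr)\,pV - c,$$

versus $pV - c$ on a purely causal accounting (and if you also count the correlated people’s time as a cost, as discussed above, subtract roughly $N\,(q_1 - q_0)\,c$ as well). The atom-by-atom copy is the limiting case $q_1 = 1$, $q_0 = 0$.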
I don’t get it.
Wikipedia’s entry on superrationality probably explains the main idea here better than I can.
I completely agree with you. This whole reasoning seems to heavily depend on using causal decision theory instead of its (in my opinion) more sensible competitors.