whether they want 300 people on the EA forum to each spend an hour (+ face COVID-19 risk?) on voting
To decide whether they want this, shouldn’t they look at the chances? How would this change the answer? Doesn’t the risk to the EA community increase nonlinearly (since the community’s marginal returns on additional members aren’t constant), while the benefit of additional votes increases roughly linearly? (There’s a toy sketch of this point below.)
Also, there are mail-in ballots, although it might be too late in some places (I’m not informed either way, so don’t take my word for it).
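To make the nonlinear-vs.-linear point above concrete, here’s a minimal toy model. Everything in it (the concave value function, the community size, the per-vote value) is an assumption made up purely for illustration, not a number from this discussion; the only point is that when a community’s value has diminishing marginal returns in membership, the expected cost of losing members grows faster than linearly, while the expected benefit of extra votes grows linearly.

```python
import math

def community_value(n: float) -> float:
    """Assumed concave (diminishing-returns) value of an n-member community."""
    return 1000.0 * math.log1p(n)

COMMUNITY_SIZE = 1_000   # illustrative community size
VOTE_VALUE = 1.0         # assumed value of one extra vote (same arbitrary units)

print("members lost / votes cast   cost of losses   benefit of votes")
for k in (10, 50, 100, 300):
    cost = community_value(COMMUNITY_SIZE) - community_value(COMMUNITY_SIZE - k)  # convex in k
    benefit = k * VOTE_VALUE                                                       # linear in k
    print(f"{k:26d} {cost:16.2f} {benefit:18.1f}")
```

With these made-up numbers, losing 300 members costs more than 30 times as much as losing 10, while 300 votes are worth exactly 30 times as much as 10 votes, which is the asymmetry the question is pointing at.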
This sounds like the evidential decision theory answer, and I’m not that familiar with these different decision theories. However, your decision to vote doesn’t cause these others to vote; it’s only evidence that they’re likely to act similarly, right? Finding that out one way or another doesn’t actually make the world better or worse (compared to alternatives), it just clears up some uncertainty you had about what the world would look like. Otherwise, couldn’t you justify confirmation bias, e.g. telling your friends to share only good news with you?
What I wrote is indeed aligned with evidential decision theory (EDT). The objections to EDT that you mentioned don’t seem to apply here. When you decide whether to vote you don’t decide just for yourself, but rather you decide (roughly speaking) for everyone who is similar to you. The world will become better or worse depending on whether it’s good or bad that everyone-who-is-similar-to-you decides to vote/not-vote.
What does this mean? If I’m in the voting booth and I suddenly decide to leave the ballot blank, how does that affect anyone else?
It doesn’t affect anyone else in a causal sense, but it does affect people similar to you in a decision-relevant-to-you sense.
Imagine that while you’re in the voting booth, in another identical voting booth there is another person who is an atom-by-atom copy of you (and assume our world is deterministic). In this extreme case, it is clear that you’re not deciding just for yourself. When we’re talking about people who are similar to you rather than copies of you, a probabilistic version of this idea applies.
I don’t get it.
Wikipedia’s entry on superrationality probably explains the main idea here better than me.
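For concreteness, here’s a minimal sketch of the “deciding for everyone similar to you” idea in Python. All of the numbers (how many people count as similar, how strongly their decisions correlate with yours, the value of a vote, the cost of voting) are made-up assumptions purely for illustration; the point is only that conditioning on your own choice shifts your expectation of what the correlated group does, and that is what the evidential calculation counts.

```python
N_SIMILAR = 300        # assumed number of people whose decisions correlate with yours
CORRELATION = 0.6      # assumed strength of that correlation
BASE_RATE = 0.5        # assumed chance a similar person votes absent any correlation
VALUE_PER_VOTE = 1.0   # assumed value of one extra vote (arbitrary units)

def expected_votes(you_vote: bool) -> float:
    """Expected votes from you plus the correlated group, conditional on your choice."""
    p_similar_votes = BASE_RATE + CORRELATION * (0.5 if you_vote else -0.5)
    return (1.0 if you_vote else 0.0) + N_SIMILAR * p_similar_votes

# EDT-style: condition on your choice, which is also evidence about the others.
edt_gain = (expected_votes(True) - expected_votes(False)) * VALUE_PER_VOTE
# CDT-style: only the ballot you causally control changes.
cdt_gain = 1.0 * VALUE_PER_VOTE

print(f"expected gain from voting, EDT-style reasoning: {edt_gain:.0f} vote-units")
print(f"expected gain from voting, CDT-style reasoning: {cdt_gain:.0f} vote-unit")
```

Under these assumptions the evidential calculation rates voting at roughly 180 vote-units versus 1 for the purely causal one, which is the sense in which you’re not deciding just for yourself.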