I agree that the share of individuals who would be convinced to vote on the basis of such an argument seems pretty small. In particular, the share of people who even hear these arguments seems small, although if you include far-future beings, the share (or influence-weighted share) could be large.
It could matter for people who are concerned with difference-making, think the probability of making a difference is too low under standard causal decision theory, and assign reasonably high probability to an infinite universe. See "Can an evidentialist be risk-averse?" by Hayden Wilkinson. It may matter on other views too, but not under risk-neutral expected value-maximizing total utilitarianism.
I’m not sure. Very few people would use the term “correlation” here; but perhaps quite a few people sometimes reason along the lines of: “Should I (not) do X? What happens if many people (not) do it?”
But I guess that changing a decision based on such an argument wouldn't be correlated with practically anyone else's decision, no?