Nice post, Toby!
I think one should skip voting if it requires more than around 10 min, and the counterfactual is either working on 80,000 Hours' most pressing problems, or donating to interventions which aim to solve them:
If we conservatively assume the total benefit of an election is equivalent to doubling real GDP (i.e. a gain of 1 times GDP), and that half of the population votes, the benefit per voter is 2 times the real GDP per capita. This is about 120 k$ in the United States.
Donating to GiveDirectly (GD) is about 100 times as effective as increasing the GDP of the United States, so the benefit per voter amounts to donating 1.2 k$ to GD.
Donating to GiveWell's top charities is about 10 times as effective as to GD, so the benefit per voter amounts to donating 120 $ to e.g. the Against Malaria Foundation.
From here, Benjamin Todd "would donate to the Long Term Future Fund [LTFF] over the global health fund, and would expect it to be perhaps 10-100x [whose geometric mean is about 30] more cost-effective (and donating to global health is already very good)".
If donating to the LTFF is 30 times as effective as donating to GiveWell's top charities, the benefit per voter amounts to donating 4 $ to the LTFF. Note this supposes the total benefits correspond to doubling real GDP.
For a modest salary of 20 $/h, 4 $ is earned in 12 min. Not much time if electronic voting is not possible.
So I tend to think spending more than 10 min on voting is not worth it if the counterfactual is either working on 80,000 Hours' most pressing problems, or donating to interventions which aim to solve them (like those supported by the LTFF).
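In case it is useful, the chain of conversions above can be sketched in a few lines of Python. The inputs are the rough illustrative figures from the text (e.g. US real GDP per capita of roughly 60 k$ is my assumption to match the 120 k$ per voter), not precise estimates:

```python
# Sketch of the conversion chain above; all inputs are rough
# illustrative figures, not precise estimates.
gdp_per_capita = 60e3       # US real GDP per capita, roughly ($), assumed
voting_fraction = 0.5       # assumed share of the population that votes

# Doubling real GDP is a gain of 1 GDP, split among the voters.
benefit_per_voter = gdp_per_capita / voting_fraction           # 120 k$

gd_multiplier = 100         # GiveDirectly vs raising US GDP
givewell_multiplier = 10    # GiveWell top charities vs GiveDirectly
ltff_multiplier = 30        # LTFF vs GiveWell top charities (geometric mean of 10-100)

benefit_as_ltff_donation = benefit_per_voter / (
    gd_multiplier * givewell_multiplier * ltff_multiplier)     # 4 $

wage = 20                   # $/h
minutes = benefit_as_ltff_donation / wage * 60                 # 12 min
```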
I suppose one could argue that participating in civil society via voting is a good norm, but I tend to think doing whatever is most impactful is a better one. One can also skip voting to donate to organisations working on voting reform, like The Center for Election Science.
Am I missing something? I recognise some of the inputs I used are quite uncertain, but I think I mostly used conservative numbers.
Thanks Vasco,
Interesting analysis. Here are a few points in response:
It is best to take my piece as an input into a calculation of whether voting is morally justified on account of changing the outcome; it is an input in the form of helping work out the probability the outcome gets changed. More analysis would be needed to make the overall moral case, especially in the many voting systems that have multiple levels, where it may be much more important to vote in marginal seats and much less in safe seats, so taking the average may be inappropriate.
You make a good point that the value depends on who the voter is and what their counterfactual options are. Most people looking at this are trying to work out the average value, to defend against claims that voting is not typically morally justified, rather than the case for particular groups such as EAs, though that is a relevant group for this forum.
In such empirical arguments, I'd be cautious about claims that $1 to the LTFF (or similar) is literally worth the same as $30,000 distributed across US citizens. Once the ratios get this extreme, you do need to worry more about issues like 0.1% of the $30,000 flowing through to extremely high value things and then outweighing the small targeted donation.
While you were trying to be very conservative by allocating a very large financial benefit to the better of the two parties, it is also relevant that who is in power at the time of the development of transformative AI capabilities could be directly relevant to existential risk, so even your generous accounting may be too small. (This factor will only apply in a small number of elections, but US presidential elections are likely one of them.)
I have a general presumption in favour of EAs acting as most people think morally responsible people should. In part because there is a good chance that the common-sense approach is tracking something important that our calculations may have lost sight of, in part because I don't think we should be trying to optimise all aspects of our behaviour, and in part because it is a legible sign of moral earnestness (i.e. it is reasonable for people to trust you less if you don't do the things those people see as basic moral responsibilities).
Thanks for the reply, great points!
I think this relates to this (great!) post from Brian Tomasik. I was assuming the doubling of real GDP corresponded to all the benefits. I can see 0.1 % of the 30 k$ going to something of extremely high value, but it can arguably lead to extremely high disvalue too. In addition, I would say it is unclear whether increasing real GDP is good, because it does not necessarily lead to differential progress (e.g. it can increase carbon emissions and consumption of animal products, and shorten AI timelines). Some longtermist interventions seem more robustly good: not those around AI, but ones like patient philanthropy, increasing pandemic preparedness, or civilisational resilience.
Repeating my analysis for existential risk:
Based on the existential risk between 2021 and 2120 of 1/6 you guessed in The Precipice (which I really liked!), the annual existential risk is 0.182 % (= 1 - (1 - 1/6)^(1/100)).
If one assumes the benefit of one vote corresponds to eliminating 2 times the annual existential risk per capita (because maybe only half of the population votes), it would be 4.55*10^-13 (= 2*(0.182 %)/(8*10^9)). I may be underestimating the annual existential risk per capita due to high-income countries having greater influence, but overestimating it due to existential risk arguably being lower in the earlier years.
Assuming the LTFF has a cost-effectiveness of 3.16 bp/G$ (i.e. 3.16*10^-13 per $), which is the geometric mean of the lower and upper bound proposed by Linchuan Zhang here, the benefit of one vote would amount to donating about 1.44 $ (= 4.55/3.16) to the LTFF.
For a salary of 20 $/h, 1.44 $ is earned in 4 min. This is similar to what I got before, and continues to suggest one should not spend much time voting if the counterfactual is working on 80,000 Hours' most pressing problems, or earning to support interventions aiming to solve them.
However, the analysis is so uncertain now that one can arrive at a different conclusion with reasonable inputs. So my overall take is that, neglecting indirect effects (like how much people trust me), I do not know whether voting is worth it given that counterfactual.
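The existential-risk version of the estimate can be sketched the same way; again, the inputs are just the rough figures quoted above:

```python
# Sketch of the existential-risk version of the estimate; all inputs
# are rough figures, not precise estimates.
total_risk = 1 / 6          # existential risk over 2021-2120 (The Precipice)
annual_risk = 1 - (1 - total_risk) ** (1 / 100)               # ~0.182 %

population = 8e9
voting_fraction = 0.5       # assumed share of the population that votes
# Benefit of one vote: eliminating one voter's share of the annual risk.
risk_reduction_per_vote = annual_risk / (voting_fraction * population)  # ~4.55e-13

# Assumed LTFF cost-effectiveness of 3.16 bp/G$, i.e. 3.16e-13 per $.
ltff_cost_effectiveness = 3.16e-4 / 1e9
dollars_to_ltff = risk_reduction_per_vote / ltff_cost_effectiveness     # ~1.44 $

wage = 20                   # $/h
minutes = dollars_to_ltff / wage * 60                                   # ~4.3 min
```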
Makes sense. I think I have been voting mostly based on this, although I am not sure about whether it makes sense for me to do so.