Thanks Vasco,
Interesting analysis. Here are a few points in response:
It is best to take my piece as an input into a calculation of whether voting is morally justified on account of changing the outcome — it is an input in the form of helping work out the probability the outcome gets changed. More analysis would be needed to make the overall moral case — especially in the many voting systems that have multiple levels, where it may be much more important to vote in marginal seats and much less in safe seats, so taking the average may be inappropriate.
You make a good point that the value depends on who it is and their counterfactuals. Most people looking at this are trying to work out the average value to defend against claims that voting is not typically morally justified, rather than trying to work out the case for particular groups such as EAs — though that is a relevant group for this forum.
In such empirical arguments, I’d be cautious about claims that $1 to the LTFF (or similar) is literally worth the same as $30,000 distributed across US citizens. Once the ratios get this extreme, you do need to worry more about issues like 0.1% of the $30,000 flowing through to extremely high value things and then outweighing the small targeted donation.
While you were trying to be very conservative by allocating a very large financial benefit to the better of the two parties, it is also relevant that who is in power at the time of the development of transformative AI capabilities could be directly relevant to existential risk, so even your generous accounting may be too small. (This factor will only apply in a small number of elections, but US presidential elections are likely one of them.)
I have a general presumption in favour of EAs acting as most people think morally responsible people should. In part because there is a good chance that the common-sense approach is tracking something important that our calculations may have lost sight of, in part because I don’t think we should be trying to optimise all aspects of our behaviour, and in part because it is a legible sign of moral earnestness (i.e. it is reasonable for people to trust you less if you don’t do the things those people see as basic moral responsibilities).
Thanks for the reply, great points!
I think this relates to this (great!) post from Brian Tomasik. I was assuming the doubling of real GDP corresponded to all the benefits. I can see 0.1 % of the 30 k$ going to something of extremely high value, but it could arguably lead to extremely high disvalue too. In addition, I would say it is unclear whether increasing real GDP is good, because it does not necessarily lead to differential progress (e.g. it can increase carbon emissions and consumption of animal products, and shorten AI timelines). Some longtermist interventions seem more robustly good, not those around AI, but ones like patient philanthropy, increasing pandemic preparedness, or civilisational resilience.
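To make the fragility point concrete, here is a toy sketch of the worry about extreme ratios (the 0.1 % leakage fraction and the assumption that the leaked dollars are as valuable per dollar as the targeted donation are purely hypothetical):

```python
# Toy illustration only; all numbers are hypothetical assumptions, not estimates.
broad_benefit = 30_000      # $ of benefit spread across US citizens
targeted_donation = 1       # $ donated to the LTFF (or similar)
leakage_fraction = 0.001    # hypothetical 0.1 % flowing to extremely high-value uses

# If the leaked dollars were as valuable per dollar as the targeted donation,
# they would outweigh it by this factor.
high_value_flow = broad_benefit * leakage_fraction   # 30 $
print(high_value_flow / targeted_donation)           # 30.0
```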
Repeating my analysis for existential risk (a short sketch putting the numbers together follows the steps below):
Based on the 1⁄6 existential risk between 2021 and 2120 you guessed in The Precipice (which I really liked!), the annual existential risk is 0.182 % (= 1 - (1 − 1⁄6)^(1/100)).
If one assumes the benefit of one vote corresponds to eliminating 2 times the annual existential risk per capita (because maybe only half of the population votes), it would be 4.55*10^-13 (= 2*(0.182 %)/(8*10^9)). I may be underestimating the annual existential risk per capita because high-income countries have greater influence, but overestimating it because existential risk is arguably lower in the early part of the period.
Assuming the LTFF has a cost-effectiveness of 3.16 bp/G$ (basis points of existential risk reduced per billion dollars), which is the geometric mean of the lower and upper bound proposed by Linchuan Zhang here, the benefit of one vote would amount to donating about 1.44 $ (= (4.55*10^-13)/(3.16*10^-13)) to the LTFF.
For a salary of 20 $/h, 1.44 $ is earned in about 4 min. This is similar to what I got before, and continues to suggest one should not spend much time voting if the counterfactual is working on 80,000 Hours’ most pressing problems, or earning to support interventions aiming to solve them.
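A minimal sketch putting the steps above together (the inputs are just the assumptions stated above, not independent estimates):

```python
# Back-of-the-envelope calculation from the comment above;
# all inputs are the stated assumptions, not my own estimates.
risk_2021_2120 = 1 / 6                                    # existential risk over 2021-2120 (The Precipice)
annual_risk = 1 - (1 - risk_2021_2120) ** (1 / 100)       # ~0.182 %

population = 8e9                                          # world population
voting_share = 0.5                                        # maybe only half of the population votes
benefit_per_vote = annual_risk / (population * voting_share)   # ~4.55e-13 (risk eliminated)

ltff_bp_per_billion = 3.16                                # assumed LTFF cost-effectiveness, bp/G$
ltff_risk_per_dollar = ltff_bp_per_billion * 1e-4 / 1e9   # ~3.16e-13 per $

dollar_equivalent = benefit_per_vote / ltff_risk_per_dollar    # ~1.44 $
minutes_at_20_per_hour = dollar_equivalent / 20 * 60           # ~4.3 min

print(f"{annual_risk:.3%}  {benefit_per_vote:.2e}  {dollar_equivalent:.2f} $  {minutes_at_20_per_hour:.1f} min")
```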
However, the analysis is so uncertain now that one can arrive at a different conclusion with reasonable inputs. So my overall take is that, neglecting indirect effects (like effects on how much people trust me), I do not know whether voting is worth it given that counterfactual.
Makes sense. I think I have been voting mostly based on this, although I am not sure whether it makes sense for me to do so.