A couple of problems I have with this analysis:

1. Excluding everything except the longtermist donations seems irrational. There is a lot of uncertainty around whether longtermist goals are even tractable, let alone whether the current longtermist charities are making or will make any useful progress (your link to 80,000 Hours’ 18 Most Pressing Problems is broken, but their pressing areas seem to include AI safety, preventing nuclear war, preventing great power conflict, and improving governance, each of which has huge question marks around it when it comes to solutions). I think you’re overestimating the certainty and therefore value of the projects focusing on “creating a better future”.

2. You should account for the potential positive sociopolitical effects that might come from a large bloc of professionals openly pledging a portion of income to effective charity. It has the potential to subtly normalise effective charity in the public consciousness, leading to more people donating more money to more effective charities and governments allocating more aid money more effectively. This theory of change is difficult to measure or prove, as for any social movement, but I don’t think it should be ignored.
Hi Henry,

Nice points!

> your link to 80,000 Hours’ 18 Most Pressing Problems is broken
Fixed, thanks.
> I think you’re overestimating the certainty and therefore value of the projects focusing on “creating a better future”
I actually think the uncertainty of longtermist interventions is much larger than that of neartermist ones, in the sense that the difference between a very good and very bad outcome is larger for longtermist interventions. However, given this large uncertainty, uncovering crucial considerations is very much at the forefront of longtermist analyses, and there is often a focus on trying to ensure the expected value is positive. So I believe the uncertainty around the sign of the expected value of longtermist interventions is lower.
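The distinction drawn here, between a wide spread of outcomes and uncertainty about the sign of the expected value, can be sketched with a small Monte Carlo simulation (all numbers below are illustrative assumptions of mine, not estimates from the post):

```python
import random
import statistics

random.seed(0)

# Hypothetical impact distributions (units and parameters are made up):
# the "longtermist" intervention has a far wider spread of outcomes,
# but its expected value is still modelled as positive.
neartermist = [random.gauss(1.0, 0.5) for _ in range(10_000)]
longtermist = [random.gauss(5.0, 50.0) for _ in range(10_000)]

# The gap between very good and very bad outcomes is much larger here...
print(statistics.stdev(longtermist) > statistics.stdev(neartermist))  # True

# ...yet the sign of the (sample) expected value is positive in both cases.
print(statistics.mean(neartermist) > 0, statistics.mean(longtermist) > 0)  # True True
```

In this toy model, many individual longtermist outcomes are negative (the spread straddles zero), but the argument above is that what matters for prioritisation is whether the expected value stays positive.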
> You should account for the potential positive sociopolitical effects that might come from a large bloc of professionals openly pledging a portion of income to effective charity
Good point. I have added a point about this to the last bullet of the summary:
> I believe it would be important to study (by descending order of importance) [I have added this parenthetical]:
> - GWWC’s impact besides donations, which I believe may well be the driver of its overall impact. [This is the point I have added.]
> - The current marginal cost-effectiveness, which will tend to be lower than the (non-marginal) one I have estimated for 2020 to 2022 given diminishing marginal returns. Such diminishing returns are supported by the negative correlation between the number of new pledges and the donations caused by GWWC per new pledge.
> - The counterfactuality of the donations going to GWWC’s cause area of creating a better future, which in my view is the driver of the overall impact of the donations.
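The marginal-versus-average point in the second item can be illustrated with toy numbers (entirely hypothetical, not GWWC's actual figures): under diminishing returns, the cost-effectiveness of the next pledge falls below the average over the whole period.

```python
# Hypothetical cumulative figures (not GWWC data): donations caused per
# new pledge shrink as the number of pledges grows (diminishing returns).
pledges = [100, 200, 300]        # cumulative new pledges
donations = [1000, 1800, 2400]   # cumulative donations caused, $k

# Average (non-marginal) cost-effectiveness over the whole period:
average = donations[-1] / pledges[-1]

# Marginal cost-effectiveness of the most recent batch of pledges:
marginal = (donations[-1] - donations[-2]) / (pledges[-1] - pledges[-2])

print(average, marginal)  # 8.0 6.0
```

The negative correlation mentioned in the summary is exactly this pattern: as new pledges accumulate, donations caused per pledge decline, so the marginal figure undershoots the average one.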