Hi Phib, Michael from the GWWC Research team here! In our latest impact evaluation we did need to consider how to think about future donations. We explain how we did this in the appendix "Our approach to discount rates". Essentially, it's a really complex topic, and you're right that existential risk plays into it (we note this as one of the key considerations). If you discount the future just based on Ord's existential risk estimates, based on some quick maths, the 1 in 6 chance over 100 years should discount each year by 0.2% (1 − (1 − 1/6)^(1/100) = 0.02).
Yet there are many other considerations that also weigh into this, at least from GWWC's perspective. Most significant is how we should expect the cost-effectiveness of charities to change over time.
We chose to use a discount rate of 3.5% for our best-guess estimates (and 5% for our conservative estimates), based on the recommendation from the UK government's Green Book. We explain why we made that decision in our report. It was largely motivated by our framework of being useful/transparent/justifiable over being academically correct and thorough.
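To make that concrete, here's a minimal sketch of what a constant 3.5% (or 5%) rate does to the present value of a future donation; the function and figures below are purely illustrative, not GWWC's actual model:

```python
# Minimal sketch of constant-rate discounting (illustrative only, not GWWC's model).
def present_value(amount: float, years_ahead: float, rate: float = 0.035) -> float:
    """Discount a donation made `years_ahead` years from now back to today."""
    return amount / (1 + rate) ** years_ahead

# A $1,000 donation expected in 20 years:
print(round(present_value(1_000, 20), 2))             # ~502.57 at the best-guess 3.5% rate
print(round(present_value(1_000, 20, rate=0.05), 2))  # ~376.89 at the conservative 5% rate
```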
If you're interested in this topic, and in how to think about discount rates in general, you may find Founders Pledge's report on investing to give an interesting read.
Hi Michael, thank you for the response, and I definitely should have checked out the full report to be more respectful of your time. Yeah, honestly it seems really complex and I understand the need to prioritize; thanks for sharing.
I'm not sure how to evaluate this. I see existential risk kind of being relegated to a bullet point in the appendix, and that may be a good place for it given the scope of such a report… but I'm also trying to reconcile this with (moderate?) estimates like Ord's… where even humoring this chance seems to change a lot. I'm also unsure that discount rates really capture the loss of value from x-risk, but maybe that's a more classic argument of near- vs longtermism.
Also, wouldn't the above "x-risk discount rate" be 2% rather than 0.2%?
I guess I am curious about this sort of tension between x-risk, transformative AI, and near-term plans for a lot of EA orgs (and this has been rather informative, thanks again!).
No problem!
Regarding your question about whether the "x-risk discount rate" should be 2% rather than 0.2%:
There was a typo in my answer before: it should read 1 − (1 − 1/6)^(1/100) = 0.0018, which is ~0.2% (not 0.02 as originally written), and is a fair amount smaller than the discount rate we actually used (3.5%). Still, if you assigned a greater probability of existential risk this century than Ord does, you could end up with a (potentially much) higher discount rate. Alternatively, even with a high existential risk estimate, if you thought we were going to find more and more cost-effective giving opportunities as time goes on, then at least for the purpose of our impact evaluation, these effects could cancel out.
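For concreteness, here's a minimal sketch of that annualisation step (an illustration only, not code from the impact evaluation), which also shows how a higher cumulative risk estimate translates into a higher annual rate:

```python
# Convert a cumulative probability of existential catastrophe over a century
# into the equivalent constant annual discount rate (illustrative sketch only).
def annual_xrisk_rate(cumulative_risk: float, years: int = 100) -> float:
    """Annual rate r such that (1 - r) ** years == 1 - cumulative_risk."""
    return 1 - (1 - cumulative_risk) ** (1 / years)

print(f"{annual_xrisk_rate(1/6):.4f}")  # 0.0018 -> ~0.2% per year (Ord's 1 in 6)
print(f"{annual_xrisk_rate(1/2):.4f}")  # 0.0069 -> ~0.7% per year (a higher estimate)
```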
I think if we spent more time trying to come to an all-things-considered view on this topic, we'd still be left with considerable uncertainty, and so I think it was the right call for us to acknowledge that uncertainty and take the pragmatic approach of deferring to the Green Book.
In terms of the general tension between potentially high x-risk and the chance of transformative AI, I can only speak personally (not on behalf of GWWC). It's something on my mind, but it's unclear to me what exactly the tension is. I still think it's great to move money to effective charities across a range of impactful causes, and I'm excited about building a culture of giving significantly and effectively throughout one's life (i.e., via the Pledge). I don't think GWWC should pivot and become specifically focused on one cause (e.g., AI), and otherwise I'm not sure exactly what the potential for transformative AI should imply for GWWC.
I really think that the discount rate equation used just doesn't capture my intuitions about how impactful x-risk would be, but I think I will just leave it at that and stop bugging you (thanks for the thoughtful response again).
Of course, it seems at some point you have to stop the recursive utilitarian dilemma of analysis paralysis, and that's probably a good place to do so in that report.
Unsure as well. I think I'm at the point of waiting and doing my best to learn, since any claims about just how transformative AI might be for the economy, and even for how we come to solve problems, come down to probabilities I'm uncertain about… to the extent that having people just think about it is all I can ask (which it seems both you and GWWC are doing, in addition to all the impactful work y'all are doing; thank you again).