Interesting read, and a tricky topic! A few thoughts:
What were the reasons for tentatively suggesting the commenters’ median estimate, rather than staying consistent with the SoGive neartermist threshold?
One reason against using the very high end of the range is the plausible existence of alien civilisations. If humanity goes extinct, but there are many other potential civilisations, and we think they have similar moral value to humans, then preventing human extinction is less valuable.
You could try using an adapted version of the Drake equation to estimate how many civilisations there might be (some of the parameters would have to be changed to account for the different context, i.e. you’re not just estimating the civilisations that could currently communicate with us in the Milky Way, but the number there could ever be in the Local Supercluster).
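To make that concrete, here’s a toy version in Python. Every parameter value is a made-up placeholder, not an estimate I’d defend; the point is just the structure: we drop the classic equation’s star-formation-rate, communication, and lifetime terms, since we care about civilisations that could ever arise in the Local Supercluster rather than ones we could contact now.

```python
# Toy adaptation of the Drake equation for the Local Supercluster.
# All parameter values are illustrative guesses, not estimates I endorse.

stars = 1e15              # rough number of stars in the Local Supercluster
f_planets = 0.5           # fraction of stars with planets
n_habitable = 0.1         # habitable planets per star with planets
f_life = 0.1              # fraction of habitable planets that develop life
f_intelligent = 0.01      # fraction of those that develop intelligence
f_expansionist = 0.1      # fraction of intelligent civilisations that would
                          # ever try to settle beyond their own system

civilisations = (stars * f_planets * n_habitable
                 * f_life * f_intelligent * f_expansionist)
print(f"{civilisations:.2e} potential expansionist civilisations")
```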
I’m still not entirely sure what the purpose of the threshold would be.
The most obvious reason is to compare longtermist causes with neartermist ones, to understand the opportunity cost, in which case I think this threshold should be consistent with the other SoGive benchmarks/thresholds (i.e. what you did with your initial calculations).
Indeed, the lower-end estimate (only valuing existing life) would be useful for donors who take a completely neartermist perspective but who aren’t set on supporting (e.g.) health and development charities.
If the aim is to be selective amongst longtermist causes, so that you’re not just funding all (or none) of them, then why not simply donate in order of cost-effectiveness, starting with the most cost-effective cause, until your funding runs out?
I suppose this is where the giving now vs. giving later point comes in. But in that case I’m not sure how you could set a threshold a priori.
It seems like you need some estimates of cost-effectiveness first. Then you could (e.g.) choose to fund the top x% of interventions in one year, and use this to inform the threshold in subsequent years. Depending on the apparent distribution of the initial cost-effectiveness estimates, you might decide ‘actually, we think there are plenty of interventions out there that are better than all the ones we have seen so far, if only we search a little bit harder’.
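A minimal sketch of that procedure (all the numbers are hypothetical, just to show the mechanics):

```python
# Sketch of the 'fund the top x% this year, use that to set next year's
# threshold' idea. Cost-effectiveness figures below are hypothetical.

def set_threshold(cost_effectiveness_estimates, top_fraction=0.2):
    """Fund the top `top_fraction` of interventions by cost-effectiveness;
    return the cost-effectiveness of the worst intervention funded,
    to use as next year's threshold."""
    ranked = sorted(cost_effectiveness_estimates, reverse=True)
    n_funded = max(1, int(len(ranked) * top_fraction))
    return ranked[n_funded - 1]

# e.g. lives saved per $1m, for ten hypothetical longtermist projects
estimates = [120, 45, 30, 8, 5, 3, 2, 1.5, 0.9, 0.1]
print(set_threshold(estimates))  # -> 45: next year, fund anything above this
```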
Trying to incentivise more robust thinking around the cost-effectiveness of individual longtermist projects seems really valuable! I’d like to see more engagement by those working on such projects. Perhaps SoGive can help enable such engagement :)
Thanks, Matt!
My estimate was just one estimate. I could have included it in the table, but when I made the table it seemed like such an outlier, and it was produced with a totally different method as well, perhaps useful for a different purpose… It might be worth adding it to the table? Not sure.
Interesting consideration! If we expect humanity to at some point technologize the Local Supercluster, and extinction prevents that, don’t we still lose all those lives? Extinction would not eradicate all life if there were aliens, but we would still lose the same amount of life in total. (I’m not endorsing any one prediction for how large the future will be.) My formulas here don’t quantify how much worse it is to lose 100% of life than 99% of life.
Sure, you could set your threshold differently depending on your purpose. I could have made this clearer!
Exactly as you say: when comparing across cause areas, you might want to keep the cost you’re willing to pay for an outcome (a life) consistent.
If you’ve decided on a worldview diversification strategy that gives you separate buckets for different cause areas (e.g. allocated by credence rather than by stakes), then you’d want to set your threshold separately for each cause area, and use each threshold to compare within that cause area. If you set a threshold for what you’re willing to pay for a life within longtermist interventions, and fewer funding opportunities meet it than your available money could cover, you can save some of the money in that bucket and donate it later, in the hope that new opportunities meeting your threshold will arise. For an example of giving later based on a threshold, Open Philanthropy aims to give money each year only to projects that are more cost-effective than what they will spend their “last dollar” on.
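As a sketch of that bucket logic (the threshold, costs, and cost-effectiveness numbers below are all hypothetical):

```python
# Rough sketch of giving from a cause-area bucket subject to a threshold,
# saving whatever doesn't clear it for later. All numbers are hypothetical.

def allocate_bucket(budget, opportunities, threshold):
    """opportunities: list of (cost, cost_effectiveness) pairs.
    Fund opportunities at or above `threshold`, best first, while the
    budget lasts; return (grants made, money saved for later)."""
    grants = []
    for cost, ce in sorted(opportunities, key=lambda o: -o[1]):
        if ce >= threshold and cost <= budget:
            grants.append((cost, ce))
            budget -= cost
    return grants, budget

# e.g. a $10m longtermist bucket, threshold of 10 lives saved per $1m
opps = [(4e6, 50), (3e6, 12), (5e6, 6), (2e6, 2)]
grants, saved = allocate_bucket(10e6, opps, threshold=10)
print(grants, saved)  # the $5m and $2m projects miss the bar; $3m is saved
```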
Thanks, me too!
Re 2 - ah yeah, I was assuming that at least one alien civilisation would aim to ‘technologize the Local Supercluster’ if humans didn’t. If they all just decided to stick to their own solar systems, or not to spread sentience/digital minds, then of course that would be a loss of experiences.
Thanks for clarifying 1 and 3!