Carl Shulman’s response here addresses objection 1. You can also see the tag for the time of perils hypothesis for a bit more discussion.
On 2, my response has a similar structure to Shulman’s response to 1: it is not vanishingly unlikely that we end up with very large (or even near-maximal) population sizes. For instance, a variety of people (including longtermists) are interested in ultimately creating vast numbers of digital minds or other sources of value, and there aren’t clearly opposing groups with direct preferences against this happening. I don’t see a very strong analogy between current low fertility and long-run cosmic resource utilization, and at a more basic level, current low fertility isn’t stable: even if the status quo continues for a long time (without e.g. the creation of powerful AI resulting in much faster technological progress), selection will likely lead to the fertility rate increasing at some point in the future unless this is actively suppressed.
Thanks for the good points, Ryan.
I can see the annual probability of the absolute value of the welfare of Earth-originating beings dropping to 0 becoming increasingly low, and their population becoming increasingly large. However, I do not think this means decreasing the near-term risk of human extinction is more cost-effective than donating to GiveWell’s top charities, or to organisations working on invertebrate welfare.
Longtermists often estimate the expected value of the future as EV = p*V, where p is the probability of reaching existential safety and V is the expected value of the future conditional on reaching existential safety. Yet I am not aware of any modelling (as opposed to pure guesses) showing how decreases in the near-term risk of human extinction translate into increases in p. I am not even aware of any detailed quantitative modelling estimating changes in the risk of human extinction.
I think decreasing the probability of worlds where humans go extinct soon will barely change p, instead mostly making nearby worlds where humans go extinct slightly later a little more likely. The easiest way to decrease the risk of human extinction in 2025 is to postpone extinction to 2026, not to a time after humans and their descendants have colonised the accessible universe.
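To make this concrete, here is a minimal sketch under assumptions I am making up purely for illustration: a 100-year time of perils with a constant annual extinction hazard, and an intervention that halves the hazard in the first year only. When later hazards stay high, even that large one-year reduction barely moves p.

```python
# Toy sketch with made-up assumptions (100-year "time of perils", constant annual
# extinction hazard h, intervention halves the hazard in the first year only).
# Not a published model; it only illustrates how little p can move when later
# hazards stay high.

def survival_probability(annual_hazards):
    """Probability of surviving every year of the perils period."""
    p = 1.0
    for h in annual_hazards:
        p *= 1.0 - h
    return p

YEARS = 100
for h in (0.001, 0.01, 0.1):  # assumed constant annual extinction hazard
    baseline = [h] * YEARS
    intervened = [h / 2] + [h] * (YEARS - 1)  # hazard halved in the first year only
    p0 = survival_probability(baseline)
    p1 = survival_probability(intervened)
    print(f"h = {h}: p goes from {p0:.3g} to {p1:.3g} (absolute gain {p1 - p0:.3g})")
```

In this toy setup, with a 10 % annual hazard, halving this year’s risk raises p by less than 2*10^-6; the probability mass mostly shifts to worlds where extinction happens slightly later.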
It could also be that increasing V by 1 % via donating to GiveWell’s top charities, or to organisations working on invertebrate welfare, is cheaper than increasing p by 1 %, and the two would be equally valuable. I estimate the absolute value of the welfare of marine arthropods is 99.99996 % (= 1 − 1/(2.50*10^6)) of the combined absolute welfare of humans and marine arthropods. So I feel like more research on improving the welfare of wild arthropods may well be a cost-effective way of increasing V.
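For what it is worth, the arithmetic above can be checked with a few lines. The 2.50*10^6 welfare ratio is my estimate from above; p and V are arbitrary placeholders, just to show that 1 % relative increases in either factor of EV = p*V are equally valuable.

```python
# Sanity checks for the claims above. The 2.50*10^6 welfare ratio is the estimate
# from the comment; p and V are arbitrary placeholders for illustration only.
p, V = 0.2, 1000.0  # placeholder probability of existential safety and conditional value
ev = p * V

# A 1 % relative increase in p is worth exactly as much as a 1 % relative increase
# in V, since EV = p*V treats relative changes to its two factors symmetrically.
gain_from_p = (p * 1.01) * V - ev
gain_from_v = p * (V * 1.01) - ev
print(f"{gain_from_p:.2f}, {gain_from_v:.2f}")  # 2.00, 2.00

# Marine arthropods' share of the combined absolute welfare of humans and marine
# arthropods, as stated above.
share = 1 - 1 / 2.50e6
print(f"{share:.5%}")  # 99.99996%
```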