Check out this post. My views have shifted slightly since then (the numbers stay roughly the same), towards:
- If Earth-based life is the only intelligent life that will ever emerge, then humans plus all other Earth life going extinct makes the EV of the future basically 0. The one exception, non-human Earth-based life optimizing the universe, is probably worth less than 10% of non-extinct-human EV, because:
  - Humans being dead updates us towards other Earth life eventually going extinct too
  - Many things have to go right for a species to evolve pro-social tendencies the way humans did, so it might not happen before the Earth becomes uninhabitable
  - This implies we should worry much more, per unit of probability, about X-risks to all of Earth life (misaligned AI, nanotech) than about X-risks to just humanity: all of Earth life dying would permanently sterilize the universe of value, while some other species picking up the torch would preserve some possibility of universe optimization, especially in worlds where CEV is very consistent across Earth life (a rough sketch of this comparison follows the list)
- If Earth-based life is not the only intelligent life that will ever emerge, then the stakes become much lower, because we'll only get our allotted bubble anyway, meaning that:
  - If humans go extinct, then some alien species will eventually grab our part of space
  - The EV of the universe (that we can affect) is then roughly bounded by how big our bubble is (even including trade, because the sensible share of a trade deal is proportional to bubble size), which is probably on the scale of tens of thousands to billions of light-years(?) wide, bounding our portion of the universe to probably less than 1% of the non-alien scenario (a rough volume check follows the list)
  - This implies that we should care roughly equally, per unit of probability, about human-bounded and Earth-bounded X-risks, as there probably wouldn't be time for another Earth species to pick up the torch between the time humans go extinct and the time Earth makes contact with aliens (at which point it's game over)
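A minimal back-of-envelope sketch of the "Earth life is alone" branch, in Python; every constant below is an illustrative assumption of mine, chosen only to respect the "less than 10%" bound above:

```python
# Back-of-envelope EV sketch for the "Earth life is alone" branch.
# Every constant is an illustrative assumption, not a derived figure.

EV_HUMANS_SURVIVE = 1.0         # normalize: EV of the future if humans make it

# If humans go extinct but other Earth life survives, a successor species might
# eventually pick up the torch. Discount that possibility for the two reasons
# above: human extinction is evidence against anything else making it, and
# pro-social intelligence may not re-evolve before Earth becomes uninhabitable.
P_SUCCESSOR = 0.25              # assumed chance a successor species emerges in time
SUCCESSOR_VALUE_FRACTION = 0.3  # assumed value of its future vs ours (CEV overlap)

ev_humans_extinct = P_SUCCESSOR * SUCCESSOR_VALUE_FRACTION * EV_HUMANS_SURVIVE  # 0.075 < 10%
ev_earth_sterilized = 0.0       # no Earth life and no aliens ever: nothing optimizes the universe

# EV destroyed per unit of probability by each class of risk:
print(f"human-only X-risk destroys {EV_HUMANS_SURVIVE - ev_humans_extinct:.3f} EV per unit probability")
print(f"Earth-wide X-risk destroys {EV_HUMANS_SURVIVE - ev_earth_sterilized:.3f} EV per unit probability")
```

The gap between the two classes of risk is exactly the successor-species term, so it widens or shrinks with how pessimistic you are about the torch being picked up.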
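A similarly rough sketch of the "aliens exist" branch; both widths below are placeholder assumptions sitting inside the hedged "tens of thousands to billions of light-years(?)" range, not measured values:

```python
# Rough volume comparison for the "aliens exist" branch.
# Both widths are placeholder assumptions, not measured or derived values.

BUBBLE_WIDTH_LY = 1e9       # assumed width of our bubble before we hit alien borders
NO_ALIEN_WIDTH_LY = 3e10    # assumed width of what we could reach if we were alone

volume_fraction = (BUBBLE_WIDTH_LY / NO_ALIEN_WIDTH_LY) ** 3
print(f"our bubble's share of the no-alien volume: {volume_fraction:.4%}")
# -> 0.0037%, comfortably under the "less than 1%" bound above, even before
#    assuming gains from trade also scale roughly with bubble size.
```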