Nice post. It’s also worth noting that this version of the far-future argument appeals even to negative utilitarians, strongly anti-suffering prioritarians, Buddhists, antinatalists, and others who don’t think it’s important to create new lives, for reasons other than holding a person-affecting view.
I also think that even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future. To my mind, the most likely so-called “extinction” event is human replacement by AIs, but AIs would be their own life forms with their own complex galaxy-colonization efforts, so I think work on AI issues should be considered part of “changing the direction of the future” rather than “making sure there is a future.”
I think it’s an open question whether “even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future.” But I broadly agree with the other points. In a recent talk on astronomical waste, I recommended thinking about AI in the category of “long-term technological/cultural path dependence/lock-in” rather than the global catastrophic risk (GCR) category (though that wasn’t the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.