Regarding the first question, I’d just say what I wrote here and in the linked posts. I don’t know what they hear from other critics; I haven’t asked them that.
It seems that most people who work on improving the expected quality of future life are negative(-leaning) utilitarians who work on s-risks. And I think it makes sense: if you assume the future will be positive, working on x-risks seems more promising. It’s very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, so work on x-risks seems to be higher leverage.
But maybe there can be some work to improve the expected quality of future life that is promising. Do you have anything concrete in mind? I was excited about popularizing the idea of hedonium/utilitronium some time ago (but not a hedonium shockwave; we don’t want to sound like terrorists, and I wouldn’t even personally want such a shockwave). But then I became too worried about various backfire risks, and I wasn’t sure how realistic a future is in which people have the means to make hedonium but just don’t think about doing it enough.
I think it makes sense: if you assume the future will be positive, working on x-risks seems more promising. It’s very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, so work on x-risks seems to be higher leverage.
This is the standard justification for working on immediate extinction risks, but I think it’s weak. It seems reasonable as a case for looking at them as a cause area first, but ‘it’s hard to predict EV’ is a very poor proxy for ‘actually having low EV’; IMO the movement has been very lazy about moving on from this early heuristic.
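To make that distinction concrete, here is a toy sketch (the two hypothetical projects and all the numbers are made up purely for illustration, not estimates of anything): a project whose payoff is hard to predict can still have a higher expected value than a predictable one, so ranking causes by predictability alone can go badly wrong.

```python
# Toy illustration with made-up numbers: "hard to predict EV" is not the same
# thing as "low EV". A high-variance project can beat a predictable one in
# expectation. Payoffs are in arbitrary units of value.
import random

random.seed(0)

def sample_predictable_project():
    # Predictable: almost always a modest benefit near 1.0.
    return random.gauss(mu=1.0, sigma=0.1)

def sample_unpredictable_project():
    # Hard to predict: usually achieves nothing, occasionally a huge win.
    return 100.0 if random.random() < 0.03 else 0.0

n = 100_000
ev_predictable = sum(sample_predictable_project() for _ in range(n)) / n
ev_unpredictable = sum(sample_unpredictable_project() for _ in range(n)) / n

print(f"Predictable project EV   ~ {ev_predictable:.2f}")    # ~1.0
print(f"Unpredictable project EV ~ {ev_unpredictable:.2f}")  # ~3.0
```

Real estimates are nothing like this clean, of course; the point is only that uncertainty about an estimate and the size of the estimate are separate questions.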
I don’t have anything concrete in mind about quality of life. I’ve been doing some work on refining existential risk concerns beyond short-term extinction; you can see the published posts here. I’m currently looking for people to take a look at the next post in that sequence and the Python estimation script it describes. If you’d be interested in having a look, it’s here :)
I do wonder whether a similar approach could be useful for quality of life, but haven’t put any serious thought into it.