Interesting to hear this update. Presumably many of these are views that people working in WAW have heard before from critics. If you were to try to persuade someone who currently feels strongly about it as a cause to shift focus, what would you say are the key factors that might sway them?
If you are a negative utilitarian (i.e., you only care about reducing suffering) or you are pessimistic about the future, you may want to prioritize work that aims to reduce the potential suffering of future digital minds instead (for example, the work of organizations like The Center on Long-term Risk).
I would love to see more work done by regular/totalising utilitarians on how we could improve the expected quality (rather than quantity) of future life, even on the assumption that it will be generally positive!
Regarding the first question, I'd just say what I wrote here and in the linked posts. I don't know what they hear from other critics; I haven't asked them that.
It seems that most people who work on improving expected quality of future life are negative (leaning) utilitarians who work on s-risks. And I think it makes sense because if you have an assumption that the future will be positive, working on x-risks seems more promising. It’s very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, and work on x-risks seems to be higher leverage.
But maybe there is some promising work to be done on improving the expected quality of future life. Do you have anything concrete in mind? I was excited about popularizing the idea of hedonium/utilitronium some time ago (but not a hedonium shockwave, since we don't want to sound like terrorists, and I wouldn't personally even want such a shockwave). But then I became too worried about various backfire risks, and I wasn't sure how realistic a future is in which people have the means to make hedonium but just don't think to do it.
I think it makes sense because if you have an assumption that the future will be positive, working on x-risks seems more promising. It’s very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, and work on x-risks seems to be higher leverage.
This is the standard justification for working on immediate extinction risks, but I think it's weak. It seems reasonable as a case for looking at extinction risks as a cause area first, but 'it's hard to predict EV' is a very poor proxy for 'actually having low EV' - IMO the movement has been very lazy about moving on from this early heuristic.
I don't have anything concrete in mind about quality of life. I've been doing some work on refining existential risk concerns beyond short-term extinction, the published parts of which you can see here. I'm currently looking for people to read over the next post in that sequence, and the Python estimation script it describes. If you'd be interested in having a look, it's here :)
I do wonder whether a similar approach could be useful for quality of life, but haven’t put any serious thought into it.