My main qualitative reaction is that the buckets “permanently improving life quality” and “reducing extinction risk” are unusual and might not be representative of what these fields generally do. Framed like the above, my intuition says that improving life quality is a lot better. But my (pretty gut-level) conclusion is the opposite: long-term AI work is more important because it will also have a greater impact on long-term happiness than WAW, which in most cases probably won’t affect the long term at all.
I do somewhat agree (my beliefs on this have also shifted somewhat after discussing the theory with others). I think “conventional” WAW work has some direct (advocacy) and indirect (research) influence on people’s values, which could help avoid certain lock-in scenarios or make them less severe. However, I think this impact is smaller than I previously thought, and I now believe that more direct work on how we can mitigate such risks is more impactful.