I think the expected value of the long-term future, in the “business as usual” scenario, is positive. In particular, I anticipate that advanced/transformative artificial intelligence will drive technological innovation that solves a lot of world problems (e.g., eventually helping create cell-based meat), and I also think a decent amount of this EV is contained in futures with digital minds and/or space colonization (even though I’d guess it’s unlikely we get to that sort of world). However, I’m very uncertain about these futures; they could just as easily contain a large amount of suffering. And if we don’t get to those futures, I’m worried that wild animal suffering will be high in the meantime. Separately, I’m not sure that addressing a lot of s-risk scenarios right now is particularly tractable (nor, more imminently, does wild animal suffering seem awfully tractable to me).
Probably the biggest reason I’m so close to the center is that I think a significant amount of existential risk from AI looks like disempowering humanity without killing literally every human; hence, I view AI alignment work as at least partially serving the goal of “increasing the value of futures where we survive.”