As I think you agree, the optimal future can be extremely good and exotic.
It seems that in your reply to Zach, you are saying that x-risk reduction moves us toward this future by definition. However, I can’t immediately find that definition in the link you provided.
(There might be some fuzzy thinking below; I am just typing quickly.)
Removing the risk of AI or nanobots and keeping humans “in the game” in 2121, just as we are in 2021, is valuable, but I don’t think this is the same as moving us to the awesome future.
I think saying we have moved 1⁄10,000 of the way to the awesome future could be a really strong statement.
Well put. I most often find it useful to think in terms of awesome future vs the alternative, but this isn’t the default definition of existential risk, and certainly not of existential-catastrophe-by-2121.