I think you have undervalued optionality value. Using Ctrl+F, I have tried to find and summarise your claims against optionality value:
- EA only has a modest amount of “control” [I'm assuming control = optionality]
- EA won’t retain much “control” over the future
- The argument for option value is based on circular logic
- Counterpoint: short x-risk timelines would be good from the POV of someone making an optionality-value argument
- Counterpoint: optionality would be more important if aliens exist and propagate negative value
- Humans existing limits option value similarly [question: by “similar” do you mean equal to?] to that of non-existence
- We can’t raise x-risk after we’ve lowered it
Without having thought about this for very long, I think the argument against optionality needs to be really, really strong, since you essentially need to demonstrate that we have equal or better decision-making abilities right now than at any point in the future.
One of the reasons optionality seems like an exceptionally good argument is that uncertainty exists both inside and outside EV models (i.e. you can model EV and include some uncertainty, but then you also need to account for uncertainty around the entire EV model, because you’ve likely made a ton of assumptions along the way). And it’s extremely unlikely this uncertainty would remain constant over time. One way we try to improve our models of the world is by making predictions and seeing if we were correct. The two reasons we do this are: making predictions is hard (so it’s a test that’s hard for a model to pass), and we have more information in the future.
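To put a toy number on that EV point, here's a minimal value-of-information sketch (the numbers and the irreversible-action framing are my own illustration, not from the post): locking in an action under our current model is worth less than keeping the option open until a prediction resolves whether the model was right.

```python
# Toy illustration (my numbers, purely hypothetical): an irreversible action
# is worth +10 if our current model is right and -10 if it's wrong, and we
# assign 60% credence to the model being right.
p_model_right = 0.6
v_if_right, v_if_wrong = 10.0, -10.0

# Acting now: we commit regardless of whether the model turns out correct.
ev_act_now = p_model_right * v_if_right + (1 - p_model_right) * v_if_wrong

# Keeping the option open: we wait until the uncertainty resolves, then act
# only if the action turns out to be good (otherwise do nothing, value 0).
ev_wait = (p_model_right * max(v_if_right, 0.0)
           + (1 - p_model_right) * max(v_if_wrong, 0.0))

print(ev_act_now, ev_wait, ev_wait - ev_act_now)  # 2.0 6.0 4.0
```

The gap (4.0 here) is the option value, and it grows the less confident we are in the model, which is why rounding it to 0 seems to require such a strong claim.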
The argument against optionality seems borderline tautological, because you essentially have to round all optionality value to 0, meaning the value of making predictions (and all of science, philosophy, etc.) is also 0.
I am basically making a fanatical argument here for optionality, whereby the only consideration that trumps it is opportunity cost.