Artificial general intelligence (AGI) might soon cause some degree of lock-in. If this does not involve a total loss of value[1], the vast majority of computation could still happen in the longterm future, but one would no longer be able to counterfactually increase or decrease welfare.
Sure, you don’t buy this, but shouldn’t you still account for it in your prior? Or is this just “a prior if we discount extinction”?
Hi Nathan,

The prior is supposed to account for extinction risk. However, a priori, one should arguably expect such risk to also be proportional to the expected value of effective computation this century as a fraction of that throughout all time [?]. Along the same lines, I wonder whether the vast majority of technological progress will happen in the longterm future.
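In symbols, a rough sketch of that proportionality (where $C_t$ stands for the effective computation performed in period $t$; the notation is just shorthand I am introducing here, not anything standard):

$$r_{\text{this century}} \propto \frac{\mathbb{E}[C_{\text{this century}}]}{\mathbb{E}\!\left[\sum_t C_t\right]}$$

On this reading, if the vast majority of expected computation lies in the longterm future, the fraction is small, and so is the risk one should attribute to this century.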