This makes sense to me, although I think we may not be able to assume a unique “true” model and prior even after taking all the time we want to think and using all the information that’s already accessible. I think we could still have deep uncertainty after this: there might still be multiple distributions that are “equally” plausible but no good way to choose a prior over them (with finitely many, we could use a uniform prior, but even that might seem wrong), so any choice would be arbitrary, and what we do might depend on that arbitrary choice.
For example, how intense are the valenced experiences of insects, and how much do they matter? I think no amount of time with access to all currently available information and arguments would get me to a unique distribution. Some or most of this is moral uncertainty, too, and there might not even be any empirical fact of the matter about how much more intense one experience is than another (I suspect there isn’t).
Or, for the US election, I think there was little precedent for some of the considerations in this election (how coronavirus would affect voting and polling), so thinking much more about them could only have narrowed the set of plausible distributions so much.
I think I’d still not be willing to commit to a unique distribution over AI risk, even with as much time as I wanted and perfect rationality, given only the information that’s currently accessible.
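To make the arbitrariness worry concrete, here’s a minimal sketch in Python. All of the models, actions, and numbers are hypothetical and purely for illustration; the point is just that, with the set of “equally plausible” candidate distributions held fixed, which action looks best can flip depending on the prior we happen to place over them.

```python
# Minimal sketch: a decision can hinge on an arbitrary choice of prior
# over a finite set of candidate models. All numbers are made up.

# Expected value of each action under each of three candidate models,
# all of which are treated as "equally plausible".
expected_value = {
    "act_A": {"model_1": 10.0, "model_2": -4.0, "model_3": 1.0},
    "act_B": {"model_1": 3.0,  "model_2": 4.0,  "model_3": 2.0},
}

def best_action(prior):
    """Return the action with the highest expected value under a prior over models."""
    def ev(action):
        return sum(prior[m] * v for m, v in expected_value[action].items())
    return max(expected_value, key=ev)

uniform_prior = {"model_1": 1/3, "model_2": 1/3, "model_3": 1/3}
skewed_prior  = {"model_1": 0.7, "model_2": 0.2, "model_3": 0.1}

print(best_action(uniform_prior))  # -> act_B (EVs: A ~2.33, B 3.0)
print(best_action(skewed_prior))   # -> act_A (EVs: A 6.3,  B 3.1)
```

Neither prior is obviously more defensible than the other here, yet they recommend different actions, which is the sense in which the final decision can rest on an arbitrary choice.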
See also this thread.