Steven, the issue is that without empirical data you end up with a branching tree of possible futures. If you make a faulty assumption early on, such as assuming the compute needed to host optimal AI models is small and easily stolen via hacking, you end up lost in a subtree where every future you consider is “doom”. You then arrive at “pDoom is 99 percent” because you are only cognitively able to explore the branches adjacent to where you already are; no living human can keep track of thousands of possibilities in parallel. (Even a modest tree explodes: four branches per decision point, ten decisions deep, is already over a million futures.) This is where I think Eliezer and Zvi are lost: they simply ignore the branches that would lead to different outcomes.
(And vice versa: start from the opposite assumptions and you can arrive at the opposite conclusion.)
It becomes angels dancing on the head of a pin. There is no way to make a policy decision on that basis. You need to prove your beliefs with data; it’s how we even got here as a species.