It feels like Etzioni is misunderstanding Bostrom in this article, but I’m not sure. His point about Pascal’s Wager confuses me:
"Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable."
Etzioni seems to be saying that Bostrom argues we must prepare for short AI timelines, even though developing HLMI on a short timeline is (in Etzioni's view) a very low-probability event?
I don't know whether Bostrom actually thinks this, but isn't Bostrom's main point that even if AI systems powerful enough to cause an existential catastrophe are at least a few decades (or even a century or more) away, we should still think now about what we can do to prepare for their eventual development, if we believe there are good reasons to think they may cause an x-catastrophe once they are developed and deployed?
It doesn't seem that Etzioni addresses this, except to imply that he disagrees with the view: he says it's unreasonable to worry about AI risk now, and that we'll (definitely?) have time to adequately address any existential risk that future AI systems may pose even if we wait to start addressing those risks until after the canaries start collapsing.