Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.
I'm not sure what the author's argument is, but here's my interpretation: if AI risk is a case of Knightian uncertainty, that is an argument against assigning a P(doom) to it.