I want to encourage more papers like this and more efforts to lay an entire argument for x-risk out.
That being said, the arguments are fairly unconvincing. For example, the argument for premise 1 completely skips the step of sketching an actual path by which AI could disempower humanity if we don’t voluntarily cede power. “AI will be very capable” is not the same claim as “AI will be capable of conquering all of humanity with certainty”; you need a connecting argument in the middle.