Thanks Aaron, I appreciate the effort.
I failed to point out my central assumption here: that transformative AI, given our current state of poor preparedness, is net negative due to the existential risk it entails.
It's a good point that time before transformative AI isn't worth much in the grand scheme of the future, but its expected value would increase substantially if transformative AI turns out to be the end.
I'm still looking for someone to flesh out the argument I don't understand, if anyone can be bothered!
"It seems to me the argument would have to be that the advantage to the safety work of improving capabilities would outstrip the increasing risk of dangerous GAI, which I find hard to get my head around, but I might be missing something important."