Also, for a bunch of reasons that I don’t currently feel like elaborating on, I expect humans to anticipate, test for, and circumvent the most egregious forms of AI deception in practice. The most important point here is that I’m not convinced that incentives for deception are much worse for AIs than for other actors in different training regimes (including humans, uplifted dogs, and aliens).
I don’t strongly disagree with either of these claims, but this isn’t exactly where my crux lies.
The key thing is “generally ruthlessly pursuing reward”.
It depends heavily on what you mean by this, but I’m kinda skeptical of the strong version of ruthless reward seekers, for similar reasons given in this post. I think AIs by default might be ruthless in some other senses—since we’ll be applying a lot of selection pressure to them to get good behavior—but I’m not sure how much weight to put on the fact that AIs will be “ruthless” when evaluating how good they are at being our successors. It’s not clear how that affects my evaluation of how much I’d be OK handing the universe over to them, and my guess is the answer is “not much” (absent more details).
Humans seem pretty ruthless in certain respects too, e.g. about survival, or increasing their social status. I’d expect aliens, and potentially uplifted dogs, to be ruthless along some axes too, depending on how we uplifted them.
I’m checking out of this conversation though.
Alright, that’s fine.