Are you saying AIs trained this way won’t be agents?
Not especially. If I had to state it simply: a large space of instrumental goals isn't useful for capabilities today, and plausibly won't be in the future either, so we have at least some reason to worry less about AI misalignment risk than we currently do.
In particular, it means we shouldn't assume that instrumental goals appear by default, and we should avoid over-relying on non-empirical approaches like intuition or imagination. We have to take things on a case-by-case basis rather than making broad judgements.
Note that instrumental convergence isn't a binary but a spectrum: the more of the space of instrumental goals that is useful for capabilities, the worse things get, continuously, rather than instrumental goals being sharply either active or inactive.
My claim is that the evidence we have weighs against much of that space being useful for capabilities, and I expect this trend to continue, at least partially, as AI progresses.
Yet I suspect this isn't hitting at your true worry, which I want to address here. I suspect your real worry is the quote below:
And regardless of whatever else you’re saying, how can you feel safe that the next training regime won’t lead to instrumental convergence?
And while I can't fully answer that question, I'd like to suggest going on a walk, drinking some water, or, if it comes to it, getting help from a mental health professional. But try to stop the loop of never feeling safe around something.
The reason I'm suggesting this is that acting on your need to feel safe has the following problem:
If adopted as a policy, it would leave us vulnerable to arbitrarily high demands for safety, possibly crippling AI use cases. As a general rule, I'm not a fan of actions that result in arbitrarily high demands for anything, at least not without scrutinizing them very heavily, and that would require far more evidence than just a feeling.
We have no reason to assume that people's feelings of safety or unsafety are actually connected to the real evidence of whether AI is safe, or of whether AI misalignment risk is a big problem. Your feelings are real, but I don't trust that your feeling of unsafety about AI is telling me anything other than how you feel about it. That's fine, to the extent it isn't harming you materially, but it's an important thing to note here.
Kaj Sotala made a similar post about why you should mostly feel safe. It's a different discussion from my comment, but it may be useful:
https://www.lesswrong.com/posts/pPLcrBzcog4wdLcnt/most-people-should-probably-feel-safe-most-of-the-time
EDIT 1: I deeply hope you can feel better, no matter what happens in the AI space.
EDIT 2: One thing to keep in mind in general is that when someone claims something is more or less dangerous based on some evidence, this usually means smoothly more or less, not all-or-nothing. So in this case I'm claiming that AI is less dangerous, probably a lot less dangerous, but that doesn't mean the danger is totally erased; it just means things are safer, and have gotten smoothly safer based on our evidence to date.