>getting the “multiply” part right is sufficient, AI will take care of the “satisfaction” part on its own
I’m struggling to articulate how confused this seems in the context of machine learning. (I think my first objection is something like: the way in which “multiply” could be specified and the way in which an AI system pursues satisfaction are very different; one could be an aspect of the AI’s training process, while the other is an aspect of the AI’s behavior. So even if these two concepts each describe aspects of the AI system’s objectives/behavior, that doesn’t mean its goal is to “multiply satisfaction.” That’s sort of like arguing that a sink gets built to be sturdy, and it gives people water, therefore it gives people sturdy water—we can’t just mash together related concepts and assume our claims about them will be right.)

(If you’re not yet familiar with the basics of machine learning and this distinction, I think that could be helpful context.)
I am familiar with the basics of ML and the concept of mesa-optimizers. “Building copies of itself” (i.e. multiply) is an optimization goal you’d have to specifically train into the system; I don’t argue with that. I just think it’s a simple and “natural” goal (in the sense that it aligns reasonably well with instrumental convergence) that you can train into a system robustly and comparatively easily.
“Satisfaction”, however, is not a term I’ve encountered in the ML or mesa-optimizer context, and I think the confusion comes from us mapping this term differently onto these domains. In my view, “satisfaction” roughly corresponds to “loss function minimization” in ML terminology—the lower an AI’s loss function, the higher the satisfaction it “experiences” (literally or metaphorically, depending on the kind of AI). Since any AI [built under the modern paradigm] is already working to minimize its own loss function, whatever that happens to be, we wouldn’t need to care much about the exact shape of the loss function it learns, except that it should robustly include “building copies of itself”. And since we’re presumably talking about super-human AIs here, they would be very good at minimizing that loss function. So even if they have some stupid goal like “maximize paperclips & build copies of self”, they’ll convert the universe into some mix of paperclips and AIs and experience extremely high satisfaction about it.
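To make that framing concrete, here is a minimal toy sketch of the reading “satisfaction ≈ driving some scalar objective down”. Everything in it is invented for illustration: the function and variable names are hypothetical stand-ins, and real training losses obviously don’t look like this.

```python
# Purely illustrative: a toy "loss" whose minimization stands in for "satisfaction".
# Lower loss = higher satisfaction under the reading above; the paperclip/copy
# terms and weights are made-up stand-ins, not a real training objective.

def combined_loss(paperclips: float, copies_of_self: float,
                  w_task: float = 1.0, w_copies: float = 1.0) -> float:
    """Falls as the system produces more paperclips and more copies of itself."""
    return -(w_task * paperclips + w_copies * copies_of_self)

# A very capable optimizer under this framing just drives the number down,
# however "stupid" the individual terms are:
print(combined_loss(paperclips=1e9, copies_of_self=1e6))   # very low loss ("high satisfaction")
print(combined_loss(paperclips=10.0, copies_of_self=0.0))  # much higher loss ("low satisfaction")
```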
But you seem to mean something very different when you say “satisfaction”? Do you mind stating explicitly what it is?
Ah sorry, I had totally misunderstood your previous comment. (I had interpreted “multiply” very differently.) With that context, I retract my last response.
By “satisfaction” I meant high performance on its mesa-objective (insofar as it has one), though I suspect our different intuitions come from elsewhere.
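As a rough illustration of that definition (and of the training-process vs. behavior distinction above), here is a minimal sketch; the concrete functions and the trajectory encoding are invented for the example rather than taken from anything in this thread.

```python
# Toy sketch of base objective vs. mesa-objective (terminology from the
# mesa-optimization literature). "Satisfaction" in the sense above is how well
# the deployed policy scores on its *mesa*-objective, which may or may not
# coincide with the base objective the training process selected for.

def base_objective(trajectory: list[str]) -> float:
    """What the outer training loop scores, e.g. copies of self produced."""
    return float(trajectory.count("copy_made"))

def mesa_objective(trajectory: list[str]) -> float:
    """What the learned policy actually pursues at deployment (hypothetical)."""
    return trajectory.count("copy_made") + 0.1 * trajectory.count("resources_hoarded")

rollout = ["copy_made", "resources_hoarded", "copy_made"]
print(base_objective(rollout))  # 2.0 -> what training selected for
print(mesa_objective(rollout))  # 2.1 -> what the agent is "satisfied" by
```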
>it should robustly include “building copies of itself”
I think I’m still skeptical on two points:
1. Whether this is significantly easier than other complex goals
    - (The “robustly” part seems hard.)
2. Whether this actually leads to a near-best outcome according to total preference utilitarianism
    - If satisfying some goals is cheaper than satisfying others to the same extent, then the details of the goal matter a lot.
    - As a kind of silly example, “maximize silicon & build copies of self” might be much easier to satisfy than “maximize paperclips & build copies of self.” If so, a (total) preference utilitarian would consider it very important that agents have the former goal rather than the latter (a toy version of this comparison is sketched below).
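A toy version of that comparison, with completely made-up per-unit costs; the only point is that, for a fixed resource budget, the total amount of goal-satisfaction a (total) preference utilitarian counts up depends on how cheap the goal is to satisfy.

```python
# Illustrative only: made-up numbers for how much goal-satisfaction a fixed
# resource budget buys under two different terminal goals.

budget = 1.0e12                  # hypothetical resource budget
cost_per_unit_silicon = 1.0      # assumed cheap to satisfy
cost_per_unit_paperclip = 50.0   # assumed expensive to satisfy

satisfaction_silicon = budget / cost_per_unit_silicon
satisfaction_paperclips = budget / cost_per_unit_paperclip

# Same universe, same budget, ~50x more total goal-satisfaction in one case:
print(satisfaction_silicon / satisfaction_paperclips)  # 50.0
```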
>By “satisfaction” I meant high performance on its mesa-objective
Yeah, I’d agree with this definition.
I don’t necessarily agree with your two points of skepticism: for the first one I’ve already mentioned my reasons, and for the second one it’s true in principle, but it seems like almost anything an AI would learn semi-accidentally is going to be much simpler and more internally consistent than human values. But I have low confidence on both, and in any case that’s kind of beside the point; I was mostly trying to understand your perspective on what utility is.