Ah sorry, I had totally misunderstood your previous comment. (I had interpreted “multiply” very differently.) With that context, I retract my last response.
By “satisfaction” I meant high performance on its mesa-objective (insofar as it has one), though I suspect our different intuitions come from elsewhere.
>it should robustly include “building copy of itself”
I think I’m still skeptical on two points:
1. Whether this is significantly easier than other complex goals. (The “robustly” part seems hard.)
2. Whether this actually leads to a near-best outcome according to total preference utilitarianism. If satisfying some goals is cheaper than satisfying others to the same extent, then the details of the goal matter a lot.
As a kind of silly example, “maximize silicon & build copies of self” might be much easier to satisfy than “maximize paperclips & build copies of self.” If so, a (total) preference utilitarian would consider it very important that agents have the former goal rather than the latter.
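To make the cheapness point a bit more concrete, here is a minimal sketch of the arithmetic, assuming a toy model (the symbols $R$, $c$, $\bar{s}$, and $n$ are my own framing, not anything stated above) in which total preference-utilitarian value is just summed satisfaction and the only constraint is a fixed resource budget:

$$
W \;=\; \sum_{i=1}^{n} s_i \;\approx\; n\,\bar{s}, \qquad n \;\approx\; \frac{R}{c} \quad\Longrightarrow\quad W \;\approx\; \frac{R}{c}\,\bar{s},
$$

where $R$ is the total resource budget, $c$ is the cost of bringing one copy to satisfaction level $\bar{s}$, and $n$ is the number of copies that budget supports. If the silicon goal has per-copy cost $c_A$ and the paperclip goal has $c_B > c_A$, then $W_A / W_B \approx c_B / c_A > 1$: the same resources score higher under the cheaper-to-satisfy goal, which is why a total preference utilitarian would care which goal the agents end up with.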
>By “satisfaction” I meant high performance on its mesa-objective
Yeah, I’d agree with this definition.
I don’t necessarily agree with your two points of skepticism: for the first, I’ve already mentioned my reasons; for the second, it’s true in principle, but it seems like almost anything an AI would learn semi-accidentally is going to be much simpler and more internally consistent than human values. But low confidence on both, and in any case that’s somewhat beside the point; I was mostly trying to understand your perspective on what utility is.