Maybe, though I’m not sure. Future applications that do long-term, large-scale planning seem hard to constrain much while still letting them do what they’re supposed to do. (Bounded goals, if they’re bounded to small-scale objectives, seem like they’d break large-scale planning; time limits seem like they’d break long-term planning; and as you mention, the “don’t kill people” counter would be much trickier to implement.)
That’s a fair point. One last thing I’ll note is that even seemingly permissive constraints can make a huge difference to the AI’s utility calculus. If I ask it to maximise paperclips, the upper utility bound is set by the amount of matter in the universe. Capping utility at a trillion paperclips doesn’t affect us much (that many would flood the market anyway), but it reduces the expected utility of an AI takeover by something like 50 orders of magnitude. A time limit, even one as long as 100 years, would have a similar effect. Seems like a no-brainer.
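A quick back-of-envelope sketch of that orders-of-magnitude claim. The mass figures here are my own illustrative assumptions (roughly 10^53 kg of ordinary matter in the observable universe, about one gram per paperclip), not numbers from the discussion above:

```python
import math

# Illustrative assumptions, not established figures from the discussion:
UNIVERSE_MASS_KG = 1e53    # rough ordinary-matter mass of the observable universe
PAPERCLIP_MASS_KG = 1e-3   # ~1 gram per paperclip

# Uncapped maximiser: utility bounded only by available matter.
uncapped_max = UNIVERSE_MASS_KG / PAPERCLIP_MASS_KG  # ~1e56 paperclips

# Capped maximiser: utility saturates at a trillion paperclips.
capped_max = 1e12

# How many orders of magnitude the cap shaves off the upper utility bound.
orders_of_magnitude = math.log10(uncapped_max / capped_max)
print(round(orders_of_magnitude))  # ~44 under these assumptions
```

Under these particular assumptions the gap comes out near 44 orders of magnitude rather than exactly 50, but the qualitative point stands either way: the cap changes almost nothing for the intended use while slashing the payoff of a takeover by a vast factor.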