It’s clear that not every constraint will work for every application, but I reckon every application will have at least some constraints that would drastically reduce risk.
I definitely agree that competitiveness is important, but remember that it’s not just about competitiveness at a specific task, but competitiveness at pleasing AI developers. There’s a large incentive for people not to build runaway murder machines! And even if a company doesn’t believe in AI x-risk, it still has to worry about lawsuits, regulations, etc. for lesser accidents. I think the majority of developers can be persuaded or forced to put some constraints on, as long as they aren’t excessively onerous.
Maybe, though I’m not sure. Future applications that do long-term, large-scale planning seem hard to constrain much while still letting them do what they’re supposed to do. (Bounded goals—if they’re bounded to small-scale objectives—seem like they’d break large-scale planning, time limits seem like they’d break long-term planning, and as you mention, the “don’t kill people” counter would be much trickier to implement.)
That’s a fair perspective. One last thing I’ll note is that even seemingly permissive constraints can make a huge difference from the perspective of the AI utility calculus. If I ask it to maximise paperclips, then the upper utility bound is defined by the amount of matter in the universe. Capping utility at a trillion paperclips doesn’t affect us much (too many would flood the market anyway), but it reduces the expected utility of an AI takeover by like 50 orders of magnitude. Putting in a time limit, even if it’s like 100 years, would have the same effect. Seems like a no-brainer.
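As a rough sanity check on that 50-orders-of-magnitude figure, here’s a back-of-envelope sketch. The universe-mass and per-paperclip-mass numbers are rough assumptions of mine, not anything stated above:

```python
import math

# Back-of-envelope check of the "orders of magnitude" claim.
# Both inputs are rough assumptions:
#   - baryonic mass of the observable universe ~1e53 kg
#   - mass of one paperclip ~1 gram
universe_mass_kg = 1e53
paperclip_mass_kg = 1e-3

max_paperclips = universe_mass_kg / paperclip_mass_kg  # ~1e56 paperclips
capped_paperclips = 1e12                               # the "trillion" cap

reduction_orders = math.log10(max_paperclips / capped_paperclips)
print(f"Uncapped upper bound: ~1e{math.log10(max_paperclips):.0f} paperclips")
print(f"Cap reduces the utility ceiling by ~{reduction_orders:.0f} orders of magnitude")
```

Under those assumptions the cap knocks roughly 44 orders of magnitude off the utility ceiling, which is in the same ballpark as the figure quoted above.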