As a Stanford CS (BS/MS ’10) grad who took AI/Machine Learning courses in college from Andrew Ng, worked at Udacity with Sebastian Thrun, etc., I have mostly been unimpressed by non-technical folks trying to convince me that AI safety (beyond risks caused by explicit human malfeasance) is a credible issue.
Maybe I have “easily corrected, false beliefs,” but the people I’ve talked to at MIRI and CFAR have been pretty unconvincing to me, as was the book Superintelligence.
My perception is that MIRI has focused in on an extremely specific kind of AI that to me seems unlikely to do much harm unless someone is recklessly playing with fire (or intentionally trying to set one). I’ll grant that that’s possible, but that’s a human problem, not an AI problem, and requires a human solution.
You don’t try to prevent nuclear disaster by making friendly nuclear missiles, you try to keep them out of the hands of nefarious or careless agents or provide disincentives for building them in the first place.
But maybe you do make friendly nuclear power plants? Not sure if this analogy worked out for me or not.
The difficulty of the policy problem depends on the quality of our technical solutions: how large an advantage can you get by behaving unsafely? If the answer is “you get big advantages for sacrificing safety, and a small group behaving unsafely could cause a big problem,” then we have put ourselves in a sticky situation and will need to conjure up some unusually effective international coordination.
A perfect technical solution would make the policy problem relatively easy—if we had a scalable+competitive+secure solution to AI control, then there would be minimal risk from reckless actors. On the flip side, a perfect policy solution would make the technical problem relatively easy since we could just collectively decide not to build any kind of AI that could cause trouble. In reality we are probably going to need both.
(I wrote about this here.)
You could hold the position that the advantages from building uncontrolled AI will predictably be very low even without any further work. I disagree strongly with that and think that it contradicts the balance of public argument, though I don’t know if I’d call it “easily corrected.”
I’m also very interested in hearing you elaborate a bit.
I guess you are arguing that AI safety (AIS) is a social rather than a technical problem. Personally, I think there are aspects of both, but that the social/coordination side is much more significant.
RE: “MIRI has focused in on an extremely specific kind of AI”, I disagree. I think MIRI has aimed to study AGI in as much generality as possible and has mostly succeeded in that (although I’m less optimistic than they are that results which apply to idealized agents will carry over and produce meaningful insights about real-world, resource-limited agents). But I’m also curious what you think MIRI’s research is focusing on vs. ignoring.
I also would not equate technical AIS with MIRI’s research.
Is it necessary to be convinced? I think the argument for AIS as a priority is strong so long as the concerns have some validity to them and cannot be dismissed out of hand.
For what it’s worth, I’d be interested to see you elaborate more on your views.
Well, you’re not the kind of person I had in mind. What I see is more a mix of basic mistakes about the technical arguments and outright defamation of relevant people and institutions.
Evaluating whether the MIRI technical agenda is relevant to AI seems pretty thorny and subjective, and perhaps not something that people without graduate-level study can do.
One thing people can contribute, when they encounter someone like you, is to figure out the precise reasons for disagreement and document/aggregate them so that those reasons can be reviewed and considered.