Granted, moral outrage can sometimes be counterproductive.
However, we have no idea which specific ML work is ‘on the critical path to dangerous AI’. Maybe most of it isn’t. But maybe most of it is, one way or another.
ML researchers are clever enough to tell themselves reassuring stories about how whatever they’re working on is unlikely to lead straight to dangerous AI, just as most scientists working on nuclear weapon systems during the Cold War could tell themselves stories like ‘Sure, I’m working on ICBM rockets, but at least I’m not working on ICBM guidance systems’, or ‘Sure, I’m working on guidance systems, but at least I’m not working on the nuclear payloads’, or ‘Sure, I’m working on simulating nuclear payload yields, but at least I’m not physically loading the enriched uranium into the warheads’. The smarter people are, the better they tend to be at motivated reasoning, and at creating plausible deniability that they played any role in increasing existential risk.
So there’s no reason for the rest of us to trust individual ML researchers’ assessments of which work is dangerous and which is safe. Clearly a large proportion of ML researchers think that what other ML researchers are doing is potentially dangerous. And maybe we should listen to them about that.
I think a better analogy than “ICBM engineering” might be “all of aeronautical engineering and also some physicists studying fluid dynamics”. If you were an anti-nuclear protester and you went and yelled at an engineer who runs wind tunnel simulations to design cars, they would see this as strange and unfair. This is true even though there might be some dual use where aerodynamics simulations are also important for designing nuclear missiles.
I understand your point. But I think ‘dual use’ problems are likely to be very common in AI, just as human intelligence and creativity often have ‘dual use’ problems (e.g. Leonardo da Vinci creating beautiful art and also designing sadistic siege weapons).
Of course AI researchers, computer scientists, tech entrepreneurs, etc. may see any strong regulations or moral stigma against their field as ‘strange and unfair’. So what? Given the global stakes, and given the reckless approach to AI development they’ve taken so far, it’s not clear that EAs should give much weight to what they think. They do not have some inalienable right to develop technologies that pose existential risks to our species.
Our allegiance, IMHO, should be to humanity in general, sentient life in general, and our future descendants. Our allegiance should not be to the American tech industry—no matter how generous some of its leaders and investors have been to EA as a movement.