I understand your point. But I think ‘dual-use’ problems are likely to be very common in AI, just as human intelligence and creativity often have ‘dual-use’ problems (e.g. Leonardo da Vinci creating beautiful art and also designing sadistic siege weapons).
Of course AI researchers, computer scientists, tech entrepreneurs, etc. may see any strong regulations or moral stigma against their field as ‘strange and unfair’. So what? Given the global stakes, and given the reckless approach to AI development that they’ve taken so far, it’s not clear that EAs should give all that much weight to what they think. They do not have some inalienable right to develop technologies that pose existential risks to our species.
Our allegiance, IMHO, should be to humanity in general, sentient life in general, and our future descendants. Our allegiance should not be to the American tech industry—no matter how generous some of its leaders and investors have been to EA as a movement.