I suppose an interesting exercise for another research project would be to tally up, in hindsight, how many activist/​protest movements seem directionally correct or mistaken in retrospect (e.g. anti-GM seems wrong, anti-nuclear seems wrong, anti-fossil-fuels seems right). That said, even if the data came in showing that activists are usually wrong, it wouldn't move me very much, since the inside-view arguments for AI risk seem quite strong to me.
Sounds interesting Oscar, though I wonder what reference class you'd use … all protests? A unique feature of AI protests is that many AI researchers are themselves protesting. If we are comparing groups on epistemics, the Bulletin of the Atomic Scientists (founded by Manhattan Project scientists) might be a closer comparison than the GM protestors (who were led by Greenpeace, farmers, etc., not people working in biotech). I also agree that the inside-view arguments about AI risk are important to consider.