For as long as it’s existed, the “AI safety” movement has been trying to convince people that superintelligent AGI is imminent and immensely powerful. You can’t act all shocked-pikachu that some people would ignore the danger warnings and take that as a cue to build it before someone else does. This was all quite a predictable result of your actions.
I have no idea what you are advocating for here. I have no inherent interest in trying to convince people that AGI is likely powerful, but it does seem likely true. Should I lie to people?
Many have chosen the path of keeping their beliefs to themselves. My guess is that wasn’t very helpful, as the “imminent and powerful” part is kind of obvious once it starts happening.
What is the predictable result here? What is the counterfactual? How does anything better happen if you don’t say anything? And why are you falsely claiming that it’s been consensus that it’s a good idea to publicly talk about the power and capabilities of AI systems? A substantial fraction of the AI safety movement did not do this, and indeed strongly advocated against doing so (again, I think mistakenly). So even if you assign blame, you obviously can’t assign it uniformly.