Thanks for the comments! I didn’t want to put estimates on the likelihood of each scenario, just to point out that they make more sense than a traditional paperclipper scenario. The chance of EA ending the world is extremely low, but if you consider who might have the means, motive, and opportunity to carry out such a task, I think EAers are surprisingly high up the list, after national governments and greedy corporations.
I don’t feel qualified to speculate too much about future AI models or LLMs vs. RL. None of the current models have shown any indication of fanaticism, so there doesn’t seem to be much reason for that to change just by pumping more computing power into them.