Below are links to my two earlier posts arguing for continued AI development:
First, I argued that AI is a necessary (almost irreplaceable) tool for dealing with the other existential risks, chiefly nuclear war:
https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of
Then I argued that current AI risk is simply "too low to be measured," and that we need to be closer to AGI before realistic alignment work is possible:
https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against