Executive summary: AI capabilities organizations like OpenAI and DeepMind should immediately halt progress and shut down because they do not know how to build powerful AI systems that are safe and beneficial.
Key points:
Current AI capabilities organizations are progressing quickly toward powerful AI without knowing how to ensure it is safe and beneficial, which risks catastrophe if such systems are deployed.
Aligning AI goals with human values is extremely difficult. Current organizations are not prioritizing this properly and are likely to deploy unsafe systems.
Shutting down and halting progress is the only way to prevent uncontrolled development of AI systems that could cause mass harm.
Even small contributions to capabilities over safety increase the risk of catastrophe. Researchers and engineers at these organizations should quit or slow progress substantially.
Related organizations contributing tools and services to unsafe AI development should also shut down.
No current organization has demonstrated the capability to build safe advanced AI before uncontrolled AI emerges.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.