You may have already seen Sam Clarke’s classification of AI x-risk sources; just sharing for others :)
Wei Dai and Daniel Kokotajlo’s older longlist might be worth perusing too?
Yes; it could be useful if Stephen briefly explained how his classification relates to other classifications, and what advantages it has (I guess simplicity is one).
This is a very informative article. The three-stage model simplifies the risks for a better understanding of what could happen. The alignment stage is a crucial stage in AI development and deployment, and the risks in all three stages are equally catastrophic. We could never be ‘ready for what is to come’, but we could surely curb it at the alignment stage!
Thank you for the piece!
Executive summary: AI development involves risks at three key stages: training (misalignment), deployment (misuse), and diffusion (systemic issues). Competitive pressures exacerbate risks across all stages.
Key points:
Training risks involve AI goals becoming misaligned with human values, causing issues when systems are deployed.
Deployment risks involve humans intentionally misusing powerful AI systems for harm.
Diffusion risks involve AI diffusing through the economy and causing unintentional systemic issues like loss of human control.
Competitive pressures make actors more likely to cut corners on safety at each stage.
Risks across stages could manifest simultaneously in complex ways.
Understanding this framework helps integrate different AI risk proposals into a cohesive whole.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.