Executive summary: The author believes there may be an opportunity for principled, ethical mass outreach to raise awareness about existential risks from advanced AI systems. This could involve appealing to non-voters’ epistemic asymmetry and bounded rationality. However, extreme care is still advised given the high stakes.
Key points:
Many non-voters don’t participate due to feeling uninformed or that their vote doesn’t matter. The author hypothesizes some may feel similarly about advanced AI.
There may be an opening to ethically communicate the stakes and asymmetry involved with advanced AI to such groups. This could encourage broader societal deliberation without necessitating technical expertise.
Any mass outreach faces severe challenges and risks, so proposals must be extremely careful, robust, and lead with the right framing. Going viral amplifies any issues.
The author welcomes suggestions for better ideas about mass outreach on existential risk, provided they meet high evidentiary standards. Most alignment researchers wrongly dismiss such efforts as inevitably unethical or ineffective.
The author admits likely gaps in their own logic and invites critical feedback, especially for anyone attempting real-world campaigns. Misstep risks could be catastrophic.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Reading this quickly on my lunch break, seems accurate to most of my core points. Not how I’d phrase them, but maybe that’s to be expected(?)