Executive summary: Artificial wisdom systems that increase wisdom in both AI and humans could help mitigate existential risks from advanced AI by improving goal-setting and decision-making, though pursuing artificial wisdom also carries potential risks.
Key points:
Wisdom is defined as having good terminal goals and subgoals while avoiding large-scale errors; artificial wisdom (AW) refers to AI systems that increase wisdom.
Increasing wisdom in both AI and humans is crucial before the arrival of artificial superintelligence to improve decision-making and goal-setting.
Four scenarios are outlined based on combinations of AI alignment and artificial wisdom, with the best outcome being aligned AI with artificial wisdom.
Potential risks of pursuing AW include diverting resources from alignment research and creating a false sense of security.
Important areas for AW to focus on include existential risk strategy, crucial considerations in longtermism, and improving decision-making for key stakeholders.
The author plans to explore specific designs for artificial wisdom systems in future articles.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.