Executive summary: The first human-level AI (HLAI) will likely be a scaled-up language model with tweaks and scaffolding, though some experts argue for more substantial architectural changes; misaligned AI takeover is the most commonly cited existential risk scenario, but inequality, institutional collapse, and gradual loss of human autonomy are also concerning possibilities.
Key points:
7 out of 17 experts expect HLAI to resemble current large language models (LLMs) with tweaks and additions, while 4 argue it will require more significant changes like improved learning efficiency, non-linguistic reasoning, or modular components.
Estimates for the arrival of HLAI range from 10 to 100+ years, with the median around 2040; the transition could be rapid once AI can automate AI research itself.
HLAI may be extremely capable at short-horizon tasks while still struggling with longer-term endeavors; it will likely automate most human labor while appearing very narrow in some respects.
Misaligned AI takeover is the most commonly cited existential catastrophe scenario, though some push back on its assumptions; other key risks include extreme inequality, breakdown of social institutions and trust, and a gradual loss of human agency.
Existential risk is highly uncertain; many experts emphasize the need for caution in the face of hard-to-predict dangers, rather than focusing on any single concrete scenario.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.