On the good side:
We hit some hard physical limit on computation which dramatically slows the relevant variants of Moore’s law.
A major world power wakes up and starts seriously focusing on AI safety as a top priority.
We build much better infrastructure for scaling the research field, e.g. AI tools that automatically connect people with relevant research and that accelerate learning using contextual knowledge of what a person has read and is interested in, likely in the form of a unified feed (funding offers welcome!). Apart’s AI Safety Ideas also falls into this category of key infrastructure.
Direct progress on alignment: some research paradigm emerges which seems likely to lead to a real solution.
A dramatic slowdown in the rate of capabilities breakthroughs, e.g. discovering some crucial category of task which requires fundamental breakthroughs that will not be achieved quickly.
More SBFs entering the funding ecosystem.
And on the bad side:
Lots of capabilities breakthroughs.
Stagnation or unhealthy dynamics in the alignment research space (e.g. vultures, loss of collective sensemaking as we scale, self-promotion becoming the winning strategy).
US–China race dynamics, especially if both countries explicitly push for AGI without robust safety considerations.
Funding crunch.