This seems to complement @nostalgebraist’s complaint that much of the work on AI timelines (Bio Anchors, AI 2027) relies on a few load-bearing assumptions (e.g. the permanence of Moore’s law, the possibility of a software intelligence explosion) and then does a lot of statistical crunching and Fermi estimation to “predict” an AGI date, when the end result is really overdetermined by those initial assumptions and barely affected by changes to the secondary estimates (the toy sketch after the list below illustrates the point). It is thus largely a waste of time to focus on refining those estimates when there is far more research to be done on the actual load-bearing assumptions:
Is Moore’s law going to continue indefinitely?
Is software intelligence explosion plausible? (If yes, does it require concentration of compute?)
Is technical alignment easy?
...
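To make the overdetermination point concrete, here is a minimal toy sketch. It is not the Bio Anchors or AI 2027 model; every parameter name and number (`scaling_stops_year`, `software_explosion`, the 12-OOM effective-compute requirement, the doubling times) is invented purely for illustration. It treats “does a software intelligence explosion happen?” and “when does hardware scaling stop?” as the load-bearing assumptions, and the required compute as a secondary estimate to be refined:

```python
import itertools

LOG10_2 = 0.301  # one doubling is ~0.301 orders of magnitude

def agi_year(required_ooms, software_explosion, scaling_stops_year,
             hw_doubling_years=2.5, algo_doubling_years=2.0, start_year=2024):
    """Toy model: count years until `required_ooms` orders of magnitude of
    effective compute accumulate. Hardware contributes until
    `scaling_stops_year`; an assumed software explosion triples progress
    once past the halfway mark. All numbers are made up."""
    year, ooms = start_year, 0.0
    while ooms < required_ooms:
        rate = LOG10_2 / algo_doubling_years            # algorithmic progress
        if year < scaling_stops_year:
            rate += LOG10_2 / hw_doubling_years          # hardware scaling
        if software_explosion and ooms > required_ooms / 2:
            rate *= 3                                    # crude "explosion" boost
        ooms += rate
        year += 1
        if year > 2200:
            return ">2200"
    return year

# Load-bearing assumptions: does a software explosion happen, and when does
# hardware scaling stop? Secondary estimate: required OOMs, swept +/- 1.
for explosion, stop in itertools.product([True, False], [2040, 2100]):
    dates = [agi_year(ooms, explosion, stop) for ooms in (11, 12, 13)]
    print(f"software_explosion={explosion!s:<5} scaling_stops={stop}: {dates}")
```

In this toy, flipping either load-bearing assumption moves the printed date by decades, while sweeping the secondary estimate shifts it much less within each row, which is roughly the sense in which polishing the Fermi arithmetic buys less than interrogating the structural assumptions.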
What are the actual cruxes for the most controversial AI governance questions, like:
How much should we worry about regulatory capture?
Is it more important to reduce the rate of capabilities growth or for the US to beat China?
Should base models be open-sourced?
How much can friction from interacting with the real world (e.g. the time needed to build factories and run experiments (poke @titotal), regulatory red tape, labor unions, etc.) prevent AGI?
How continuous are “short-term” AI ethics efforts (FAccT, technological unemployment, military uses) with “long-term” AI safety?
How important is it to enhance collaboration between US, European and Chinese safety organizations?
Should EAs work with, for, or against frontier AI labs?
...