Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?
I think so; see here or here for a bit more discussion on this.
If you think that AI will go pretty well by default (which I think many neartermists do)
My guess/impression is that this just hasn’t been discussed by neartermists very much (which I think is one sad side-effect of bucketing all AI stuff into a “longtermist” worldview).
This looks super interesting, thanks for posting! I especially appreciate the “How to apply” section.
One thing I’m interested in is seeing how this actually looks in practice: specifying real exogenous uncertainties (e.g. about timelines, takeoff speeds, etc.), policy levers (e.g. these ideas, different AI safety research agendas, etc.), relations (e.g. between AI labs, governments, etc.), and performance metrics (e.g. “p(doom)”, plus many of the sub-goals you outline). What are the conclusions? What would this imply about prioritization decisions? And so on.
I appreciate this would be super challenging, but if you are aware of any attempts to do it (even with just a very basic, simplified model), I’d be curious to hear how it’s gone.