Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that, given our great uncertainty, it’s nearly as likely to be net-negative as net-positive.
Is this for both technical AI work and AI governance work? For both, what are the main ways these interventions are likely to backfire?