[EA has] largely moved away from explicit expected value calculations and cost-effectiveness analyses.
How so? I hadn’t gotten this sense. Certainly we still do lots of them internally at Open Phil.
Re: cost-effectiveness analyses always turning up positive, perhaps especially in longtermism. FWIW that hasn’t been my experience. Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it’s nearly as likely to be net-negative as net-positive given our great uncertainty, and therefore I end up stuck doing almost entirely “meta” things like creating knowledge and talent pipelines.
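To make the quoted reasoning concrete, here is a minimal Monte Carlo BOTEC (purely illustrative; all numbers are hypothetical and not Open Phil’s) of an intervention whose sign is genuinely uncertain:

```python
import random

# Hypothetical BOTEC: an AI-related intervention that reduces risk in some
# scenarios and increases it in others. All numbers are made up.
random.seed(0)

def simulate_effect():
    # ~45% of scenarios backfire; magnitudes are drawn from the same rough
    # lognormal distribution either way (in arbitrary "basis points of risk").
    sign = 1 if random.random() > 0.45 else -1
    return sign * random.lognormvariate(mu=0.0, sigma=1.0)

samples = [simulate_effect() for _ in range(100_000)]
mean = sum(samples) / len(samples)
share_negative = sum(s < 0 for s in samples) / len(samples)

print(f"Expected risk reduction: {mean:.3f}")
print(f"Share of scenarios that are net-negative: {share_negative:.1%}")
```

With numbers like these the expected value is small relative to the spread of outcomes, which is the situation being described: the decision ends up dominated by uncertainty about the sign rather than by the point estimate.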
It might be helpful if you published some more of these to set a good example.
They do discuss some of these and have published a few here, though I agree it would be cool to see some for longtermism (the sample BOTECs are for global health and wellbeing work).
I guess no one is really publishing these CEAs, then?
Do you also have CEAs of the meta work you fund, in terms of AI risk reduction/increase?
Is this for both technical AI work and AI governance work? For both, what are the main ways these interventions are likely to backfire?
Some quick thoughts: I would guess that Open Phil is better at this than other EA orgs, both because its people are individually more competent and because its institutional incentives are much better (egos aren’t wedded to specific projects succeeding). For your specific example, I’m (as you know) new to AI governance, but I would naively guess that most people in AI governance (including when weighted by competence) are more positive about AI interventions than you are.
Happy to be corrected empirically.
(I also agree with Larks that publishing a subset of these may be good for improving the public conversation/training in EA, but I understand if this is too costly and/or if the internal analyses embed too much sensitive information or models.)
I immediately thought of GiveWell’s Why We Can’t Take Expected Value Estimates Literally (Even When They’re Unbiased): https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
Except this comes all the way from 2011, so it can’t really be used to strongly argue that EA has recently moved away from explicit EV calculations. It looks more likely that strong skepticism of explicit EV calculations has been a feature of the EA community since its inception.
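For anyone who hasn’t read the post, its central move (as I understand it) is a normal-normal Bayesian adjustment: an explicit, unbiased but noisy EV estimate gets shrunk toward a prior, more aggressively the noisier the estimate is. A minimal sketch, with hypothetical numbers rather than GiveWell’s:

```python
def bayesian_adjusted_estimate(prior_mean, prior_sd, estimate, estimate_sd):
    """Combine a normal prior with a normally distributed, unbiased estimate."""
    prior_precision = 1 / prior_sd ** 2
    estimate_precision = 1 / estimate_sd ** 2
    posterior_mean = (
        prior_mean * prior_precision + estimate * estimate_precision
    ) / (prior_precision + estimate_precision)
    posterior_sd = (prior_precision + estimate_precision) ** -0.5
    return posterior_mean, posterior_sd

# Hypothetical: a prior that interventions cluster around 1x some benchmark,
# and a noisy explicit estimate claiming 10x.
print(bayesian_adjusted_estimate(prior_mean=1.0, prior_sd=1.0,
                                 estimate=10.0, estimate_sd=5.0))
# -> roughly (1.35, 0.98): the noisier the estimate, the more it collapses to the prior.
```

The practical upshot is the same as the post’s: a wildly high explicit EV estimate with huge error bars barely moves the posterior, which is one reason explicit EV calculations have been treated skeptically from the start.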