How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks?

This was originally posted as a comment on an old thread. However, I think the topic is important enough to deserve a discussion of its own. I would be very interested in hearing your opinion on this matter. I am an academic working in the field of philosophy of science, and I am interested in the criteria used by funding institutions to allocate their funds to research projects.

A recent trend of awarding relatively large research grants (relative to some of the most prestigious research grants across the EU, such as ERC Starting Grants of ~1.5 million EUR) to projects on AI risks and safety made me curious, so I looked a bit more into this topic. What struck me as especially curious is the lack of transparency when it comes to the criteria used to evaluate the projects and to decide how to allocate the funds.

Now, for the sake of this article, I will assume that the research topic of AI risks and safety is important and should be funded (to what extent it actually is, is beside the point and deserves a discussion of its own; so let’s just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).

Particularly surprising was a sudden grant of 3.75 million USD by the Open Philanthropy Project (OPP) to MIRI. Note that this is more than double the amount given to ERC Starting Grant recipients. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision. So one would expect at least as much for a grant 7.5 times as large. But what we actually find is an extremely brief explanation saying that an anonymous expert reviewer has evaluated MIRI’s work as highly promising in view of their paper “Logical Induction”.

Note that in the two years since I first saw this paper online, it has not been published in any peer-reviewed journal. Moreover, if you check MIRI’s publications, you will find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter. *Correction:* there are five papers published as conference proceedings in 2016, some of which seem to be technical reports rather than actual publications, so I am not sure how their quality should be assessed; I see no such proceedings publications in 2017). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking whether perhaps the publication list on their website was simply out of date, and OPP, asking for the evaluative criteria used when awarding this grant.

MIRI never replied (email sent on February 8). OPP took a while to reply, and last week I received the following email:

“Hi Dunja,

Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer’s reasoning in reviewing MIRI’s work. Unfortunately, we don’t have permission to share the reviewer’s identity or reasoning. I’m sorry not to be more helpful with this, and do wish you the best of luck with your research.

Best,

[name blinded in this public post; I explained in my email that my question was motivated by my research topic]”

All this is very surprising given that OPP prides itself on transparency. As stated on their website:

“We work hard to make it easy for new philanthropists and outsiders to learn about our work. We do that by:

  • Blogging about major decisions and the reasoning behind them, as well as what we’re learning about how to be an effective funder.

  • Creating detailed reports on the causes we’re investigating.

  • Sharing notes from our information-gathering conversations.

  • Publishing writeups and updates on a number of our grants, *including our reasoning and reservations before making a grant*, and any setbacks and challenges we encounter.” (emphasis added)

However, the main problem here is not the mere lack of transparency, but the lack of an effective and efficient funding policy.

The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades. Yet some of the basic criteria seem absent from cases such as the one described above. For instance, establishing that a given research project is worthy of pursuit cannot be done merely in view of the pursuit-worthiness of the research topic. Instead, the project has to present a viable methodology and objectives, which have been assessed as apt for the given task by a panel of experts in the relevant domain (rather than by a single expert reviewer). Next, the project initiator has to show expertise in the given domain (where one’s publication record is an important criterion). Finally, if the funding agency has a certain topic in mind, it is much more effective to make an open call for project submissions, from which an expert panel selects the most promising one(s).

This is not to say that young scholars, or simply scholars without an impressive track record, would be unable to pursue the given project. However, the important question here is not “Who could pursue this project?” but “Who could pursue this project in the most effective and efficient way?”

To sum up: transparent markers of reliability over the course of research are extremely important if we want to advance effective and efficient research. And a panel of experts (rather than a single expert) is extremely important in assuring the procedural objectivity of the given assessment.

Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up being an argument for the claim that organizations close to effective altruism are not effective at all.