Here’s an updated ipynb with OpenPhil’s annual spending, showing the breakdown with respect to EA-relevant areas.
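For readers who haven't opened the notebook, the core computation is presumably something like the following sketch. The data, area labels, and column names here are made up for illustration; the real notebook would load Open Phil's public grants database instead.

```python
import pandas as pd

# Hypothetical grant records; the real notebook loads Open Phil's
# published grants data rather than hand-coding rows like this.
grants = pd.DataFrame({
    "year": [2018, 2018, 2019, 2019, 2020],
    "area": ["AI risk", "Biosecurity", "AI risk", "Global health", "AI risk"],
    "amount_usd": [2_000_000, 500_000, 3_000_000, 10_000_000, 1_500_000],
})

# Annual spending broken down by EA-relevant area:
# rows are years, columns are areas, cells are total dollars granted.
breakdown = grants.pivot_table(
    index="year", columns="area",
    values="amount_usd", aggfunc="sum", fill_value=0,
)
print(breakdown)
```
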
My main impressions:
Having Ben Delo’s participation is great.
That OpenPhil and its staff work hard on allocating these funds is absolutely great (it’s obvious, yet worth saying over and over again).
It would be nice to see more new kinds of grants (to longtermist causes) by EA, via OpenPhil and otherwise. The kinds of grants have been relatively stagnant over the last few years: e.g., the typical x-risk grant is a few million dollars to an academic research group. Can we also fund more interventions, or projects in other sectors?
The AI OpenPhil Scholarships place substantial weight on the excellence of applicants’ supervision, institutional affiliation, and publication record. But there seems to be very little weight on the relevance of the work done—I’ve only come across a few papers by any of the 2018–2020 applicants through my work on various aspects of AI x-risk. I’ve heard many people better-informed than me argue that this is likely to be relatively unproductive, in the sense that excellent researchers working in unrelated areas will tend to accept funding without substantially shifting their research direction. I’m as excited about academic excellence as almost anyone in AI safety, yet in the case of the OpenPhil Scholarships, this assessment sounds about right to me, and I haven’t really heard anyone argue the opposing view—it would be interesting to understand this thinking better.
Hi Ryan—in terms of the Fellowship, I have a lot of thoughts about what we’re trying to do, which feel better suited to “musing, with uncertainty” than “writing an internet comment”, so let me know if you want to call/chat about it some time? But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly.
Hey Catherio, sure, I’ve been puzzled by this for long enough that I’ll probably reach out for a call.
Community effects could still be mediated by the relevance of participants’ research interests. Anyway, I’m also pretty uncertain and interested to see the results as they come in over the coming years.
Have you guys ended up doing this call? If so, do you feel like you have a (compressed) understanding and/or agreement with OpenPhil’s position here?
We didn’t do any call yet!
OpenPhil has introduced early career funding for people who are interested in the long-term future, including AI safety, here: https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future
This should cause their overall portfolio of AI scholarships to place more weight on the relevance of the research done, which seems like an improvement to me.