I'm personally not excited about making these kinds of grants and think there are now much more cost-effective opportunities within AI safety (in part because progress reports on these kinds of "speculative advocacy" grants rarely show them panning out, though we haven't made many of them). I'll nudge the primary investigator to see if they want to explain their reasoning for this grant here (if it's not already in a payout report).
I agree that we should have more public retrospective evaluation of our grants. We commissioned some work in this space but were unhappy with the results, and I'm trying to figure out better solutions; I suspect it should be a priority for the second half of the year. I don't expect grants of this nature to feature prominently in that work (we do relatively little advocacy, and the notable grants we have made are, imo very unfortunately, private; in any case, by dollar amount it's very low).
I think the broad areas for the LTFF which would be valuable to retrospectively evaluate are:
- technical AI safety work (broken down by area or AI worldview)
- AI governance
- biosecurity research
- general longtermist research
- longtermist comms (e.g. this grant and Rob Miles's videos)
- field building (broken down by field)
- general GCR/EA community building
It's also worth noting that this grant was made at a time when AIS/EA had a lot more philanthropic capital, so the funding bar was much lower (and in my opinion there were far fewer promising projects to fund). Maybe we should indicate that in the public grants database?
I think this paragraph is more important than the individual case highlighted in this article:
However, I found numerous other cases, many even worse, primarily involving digital creators with barely any content produced during their funding period, people needing big financial support to change careers, and independent researchers whose proposals had not, at the time of writing, resulted in any published papers.
If you made a record of these grants, would you be interested in sharing them with me? I'd like to check them against our internal progress reports; if there's a large mismatch in our views on the success of these grants, we should probably put more effort into communicating why we think they are valuable.