Okay, giving entirely my own professional view here, absolutely not speaking for anybody else or the fund writ large:
There are several organizations that work on helping non-humans in the long-term future[...]; do you think that their activities could be competitive with the typical grant applications that LTFF gets?
To be honest, I’m not entirely sure what most of these organizations actually research on a day-to-day basis. Here are some examples of what I understand to be the one-sentence pitches for many of these projects:
- figuring out models of digital sentience
- research on cooperation in large worlds
- how to design AIs to reduce the risk that unaligned AIs lead to hyperexistential catastrophes
- moral circle expansion
- etc.
Intuitively, they all sound plausible enough to me. I can definitely imagine projects in those categories being competitive with our other grants, especially if and when our bar lowers to where I think the longtermist bar overall “should” be. That said, the specific details of those projects, the individual researchers, and the organizational structure and leadership matter as well[1], so it’s hard to give a general answer.
From a community-building angle, I think junior researchers who try to work on these topics have a reasonably decent hit rate of progressing to important work in other longtermist areas. So I can imagine a reasonable community-building case for funding some talent development programs as well[2], though I haven’t done a BOTEC, and again the specific details matter a lot.
For example, I’m rather hesitant to recommend funding to organizations where I view the leadership as having a substantially higher-than-baseline rate of being interpersonally dangerous.
I happen to have a small COI with one of the groups, so were they to apply, I would likely recuse myself from the evaluation.