I may be unfairly lumping the LTFF in with EAIF (I’m skeptical of longtermism and wish it were less prominent in EA).
LTFF recently released a long payout report, which you might or might not find helpful to dive into. FWIW I think relatively few of our grants are contingent on philosophical longtermism, though many of them are probably only cost-effective if you think there’s a non-trivial probability of large-scale AI and/or biorisk catastrophes in the next 20-100 years, in addition to other, more specific worldviews that fund managers may have.
Thanks for linking to that! I appreciate the transparency in the write-up, and thanks for responding well to criticism. I don’t have the knowledge to evaluate the quality of LTFF’s AI-related grants. But I do have some experience in pandemic / aerosol disease transmission, and I’ve been pretty stunned by the lack of domain expertise in EA’s work in this space, despite the attention it receives. Other experts have told me they share the concern. There seems to be a strong bias in EA toward sourcing knowledge from “value-aligned” people who brand themselves as EAs, even if they aren’t the leading experts in the field. That can result in a tendency to fund EA friends or friends-of-friends, or people seen as “value-aligned”, rather than proactively seeking out expertise. I’ve seen a few examples of this in EA Funds and in other EA domains, but I don’t have a clear picture of how widespread the issue is. I also know EA Funds doesn’t really have infrastructure set up to prevent such conflicts of interest. I don’t think the AWF and GHDF have as much of an issue, because they have a much stronger evidence base, which makes it harder to argue that funding friends is the most effective use of funds.
Thanks for the feedback! But this all sounds very generic and I don’t know how to interpret it. Can you give specific examples of pandemic/aerosol grantees we’ve funded that you think shouldn’t have been funded, or (with their permission ofc) grants that we rejected that you think should have been funded?
Happy to message or chat 1:1; I don’t want to dispute specific LTFF grants in the comment section.
DM’d, though I also think disputing specific LTFF grants in EA Forum comments is a time-honored tradition; see e.g. the comments here.