The current fund managers predominantly have expertise in AI and macrostrategy. For grant evaluations related to other existential risks or longtermist causes, we plan to continue to get external advice.
I'm glad both that you explicitly acknowledge this potential limitation of the LTFF, and that you have that plan in place for addressing it.
One of the handful of things I've previously felt a bit uncomfortable/uncertain about regarding the LTFF was that it (a) seemed to mostly have AI-focused fund managers and mostly(?) give to AI-related things, yet (b) presented itself as interested in a broader range of longtermist issues and didn't make it clear that it would focus much more on AI than on other things.
I didn't see this as a major flaw, since:
- I do think AI should receive more longtermist attention than any single other topic (though not >50% of all longtermist attention)
- The LTFF did also give to some other things
- The LTFF did report all its payouts and at least snippets of its reasoning for each decision
But the situation still seemed not quite ideal.
I still don't feel sure that the LTFF is doing everything it should on this front, but it now seems likelier that that's the case or almost the case.
We may also appoint fund managers who are experts in those areas, but given that the number of applications in other individual categories is relatively small, we tentatively prefer appointing fund managers with a generalist longtermist background.
That sounds reasonable to me.
I think I'd personally see it as ideal if the LTFF always had at least one member who focuses at least 50% of their efforts on longtermist priorities other than AI (e.g. biorisk, nuclear risk, forecasting, improving policy, global governance).
It seems a bit of a shame that this isn't currently the case.
I'm not saying EA Funds made bad hiring decisions; there are of course other considerations at play when deciding about specific candidates.
But I think Ozzie Gooen and maybe Daniel Eth would count, so having them as guest managers (as well as drawing on other people as advisors where relevant) seems to help on this front.
I don't think Oliver Habryka would count for what I have in mind, even though I assume he spends less than 50% of his time on AI. (Though I'm not at all saying he shouldn't be on the committee, and I've very much appreciated his detailed writeups about LTFF decisions.)
And in any case, it doesn't seem very important to me that there's at least one person who focuses at least 50% of their efforts on a single, specific longtermist priority other than AI (as opposed to a grab bag of longtermist priorities other than AI). So "tentatively prefer[ring] appointing fund managers with a generalist longtermist background", rather than ones with expertise in a specific non-AI area, seems fine to me.
Thanks! I personally agree with these points, and I think this is useful input for our internal discussion.