This fund was spun out of the Long-Term Future Fund (LTFF), which makes grants aiming to reduce existential risk. Over the last five years, the LTFF has made hundreds of grants specifically in AI risk mitigation, totalling over $20 million. Our team includes AI safety researchers, expert forecasters, policy researchers, and experienced grantmakers. We are advised by staff from frontier labs, AI safety nonprofits, leading think tanks, and others.
More recently, ARM Fund has been doing active grantmaking in AI safety areas; we'll likely write more about this soon. I expect the funds to become much more differentiated in staff over the next few months (though that's not a commitment). Longer term, I'd like them to be fairly separate entities, but for now they share roughly the same staff.
Is there a difference in philosophy, setup, approach, etc. between the two funds?
I think ARM Fund is still figuring out its identity, but roughly, the fund was created to be something you'd be happy to refer your non-EA, non-longtermist friends (e.g. in tech) to: people who are interested in donating to organizations working on reducing catastrophic AI risk, but who aren't willing (or, in some cases, able) to put in the time to investigate specific projects.
Philosophically, I expect ARM Fund (including its advisors and future grant evaluators) to care moderately less than the LTFF about, e.g., the exact distinction between catastrophic risks and extinction risks, though it will still focus only on genuine catastrophic risks and not safetywash other near-term issues.
The main difference in actions so far is that ARM Fund has focussed on active grantmaking (e.g. in AI x information security fieldbuilding), whereas the LTFF has a more democratic, passive grantmaking focus. I also don't think ARM Fund has reached product-market fit yet; it's done a few things reasonably well, but I don't think it has a scalable product (unless we decide to do a lot more active grantmaking, though so far that has been more opportunistic).