I'm extremely grateful for your donation. The impact Apart has had on individuals, and their personal stories, is what makes all this work worth it, so we really, really appreciate this!
This is an in-depth answer to your questions (reasons behind this campaign, why the timeline, what we do, how this situation relates to the general AIS funding ecosystem, what mistakes we made, and a short overview of the impact report and newer numbers).
Read the campaign page and the Apart Research Impact Report V1.0 before you continue.
This campaign
We’re extremely grateful for the response we’ve received on this campaign, such as the many personal comments and donations further down on the /donate page and on Manifund, and this is really what makes it exciting to be at Apart!
We have one of the more diverse funding pools among organizations in our position[1], but org-wide community-building funding mostly depends on OpenPhil, LTFF, and SFF. This situation comes after a pass from LTFF on a grant we were highly confident in, since we had outperformed our last grant with them; unfortunately, we misjudged how underfunded LTFF itself was. Additionally, OpenPhil has been a smaller part of our funding than we would have hoped.
The last-minute nature of this campaign is largely a consequence of delayed response timelines (something that is pretty normal in the field; see further down for elaboration), along with somewhat limited engagement from OpenPhil’s GCR team on our grants throughout our lifetime.
I’ll also mention that non-profits generally spend an immense amount of time on fundraising campaigns. What we feel is important to share transparently as part of this campaign are all the parts of our work that otherwise get overlooked in a “max 200 words” grant application focused on field-building.
We’ve been surprised at how important anecdotes actually are and have prioritized them too little in our applications. Everyone has shared their personal stories now, and they are included across the campaign as a result. Despite this, Apart was still the second highest-rated grant for our PI at LTFF; they simply had to reject it due to its size, since they were themselves underfunded.
With OpenPhil, I think we’ve been somewhat unlucky with the depth of grant reviews and feedback from their side, missing the opportunity to respond to their uncertainties. Despite receiving some top-tier grants from SFF and LTFF in 2024, an organization like ours is dependent on larger OP grants unless we have successful long-term public campaigning similar to the Wikimedia Foundation’s, or direct interfacing with high-net-worth individuals, something every specialized non-profit outside AI safety needs as it scales.
Hope that clarifies things a bit! We’ve consistently pivoted towards more and more impactful areas, and I think Apart is now harvesting the impact of growing as an independent research lab. Our latest work is very exciting: the research is getting featured across media, our backend software is now in use, and governments are calling us for help, so it’s unfortunate to find the organization in this situation.
For others raising funds, here is what Apart could have done to improve the situation:
Stay more informed about funding shortfalls for specific foundations we rely on. This was especially important for the current situation.
Rely less on expectations that larger funding pools would follow AI capabilities advancement and the urgency of AI safety.
Related to the above point, avoid scaling based on the YoY trend line of funding Apart received from 2022 through 2024 (conditional projected growth capacity), since this trend wasn’t sustained in 2025, and staying at the mid-2024 scale for longer may have been better (though the speed of AI development leads to somewhat different conclusions, and we didn’t grow more than 50% from last year’s budget).
Be more present in SF and London to interface face-to-face with funders and answer their uncertainties (these usually come back after the grant process, and we often have relatively good answers to them but don’t get the chance to provide them before a decision is made).
Communicate more of our impact and our work throughout the EA and AIS community, beyond large academic participation and our communication to direct beneficiaries of our work: participants, newsletter subscribers, partners, etc. This is already under way, but I would guess there’s a six-month lead time or so on it.
Engage other visionary donors outside AIS to join in on the funding rounds, potentially under other attractive narratives (something that usually takes two years to activate and that I’m certain will be possible by 2026).
Rely less on previous results as evidence for forecasts of grant-making decisions.
With that said, I think we’ve acted as well as we could, and this campaign is part of our contingency plans, so here we are! We could’ve launched it earlier but that is a minor point. I’m confident the team will pull through, but I’ll be the first to say that the situation could be better.
The team and I believe a lot in the work that happens at Apart, and I’m happy that it seems our researchers and participants agree with us—we could of course solve it all by pivoting to something less impactful, but that would be silly.
So overall, this is a relatively normal situation for non-profits outside AI safety and we’re just in a place where the potential funders for AI safety community-building are few and far between. This is not a good situation for Apart, but it is what it is!
Some notes on what Apart does
Since this is a longer answer, it may also be worth clarifying a few misunderstandings that sometimes come up around our work, due to what seems like an early grounding of the ‘Apart narrative’ in the community that we haven’t worked enough to update:
“Apart is an undergraduate talent pipeline”: Apart’s impact happens mostly for mid-career-adjacent technical talent, while we of course give anyone the chance to be a part of AI safety simply based on the quality of their work (re: Mathias’ comment). E.g., the majority of participants in our research challenges / sprints are graduate students and above (I can double-check the numbers, but it’s about 30%-40% mid-career technical talent).
“Apart just does hackathons”: Our hackathons seem quite effective at engaging people directly with research topics without CV discrimination, and we believe a lot in their impact. However, most of our labor hours are spent on our lab and research accelerator, which helps people all over the world become actual researchers on their own time and engage with academia; this makes us less visible on e.g. LessWrong than we perhaps should have strategically been for the key funders to take notice. E.g., we had 9-15 researchers present at each of the last three major AI research conferences (ICML, NeurIPS, ICLR), with multiple awards and features. See more in the Impact Report.
“Apart is focusing on mechanistic interpretability”: Most of our earliest papers were related to mechanistic interpretability due to the focus of our former research director, but our agenda is really to explore the frontier of under-explored agendas, which may also lead to less attention for our work within AI safety. Our earliest work in 2022 was on the LLM psychology agenda and model-human alignment, with much of our latest work being in metrology (the science of evals), “adjacent field X AGI”, and dark patterns: agendas that we generally believe are very under-explored. If we optimized for funding-friendliness, one could argue that we should focus even more on the agendas that receive attention and have open RFPs, but that is counter to our theory of change.
Funding ecosystem
The situation for Apart speaks to broader points about the AI safety funding ecosystem that I’ll leave here for others who may be curious about how an established organization like Apart may run a public campaign with such a short runway:
Other organizations within AI safety have a similarly high dependence on a few funders and also face significant funding cliffs. Due to Apart’s focus on transparency and community engagement, Jason, our team, and I decided to approach this with a public campaign, since we believe a lot in open engagement with the community. I won’t name any names, but this is one of those facts that has been well known among this small circle all throughout 2023, 2024, and 2025.
The current administration’s retraction of national science funding sources means there’s currently an even larger pool of grantees requesting funding, and even academic labs have to close, with many US academics moving to other countries. A key example is the $3B in research funding terminated for Harvard University.
Apart receives enough restricted funding for specific research projects; this is generally quite generous and available, since there are many foundations providing it (from OpenAI Inc. 501(c)(3) to Foresight Institute to Schmidt Sciences to aligned private companies). This is not a big problem, and you’ll see that orgs focused on a single technical agenda have an “easier” time raising.
Our goal is the global talent engagement that we have seen a lot of success in (see the testimonials on the https://apartresearch.com/donate page for a few examples), which is of course “field-building.” For this, there’s OpenPhil, LTFF, and SFF, besides a few new opportunities popping up that are more available to people with close networks to specific high-net-worth foundations (some of which we miss out on from being globally represented but not as well represented within the Bay Area).
The fact that funders aren’t extremely excited about specific work should generally be weighted much lower than it is in EA and AIS. E.g., see the number of early investors passing on Airbnb and Amazon (40/60), and the repeated grant rejections and lack of funding for Dr. Katalin Karikó’s research. Rejections are a signal, but not necessarily about the underlying ideas. Every time we’ve had the chance to have longer conversations with grantmakers, they’ve been quite excited. However, this is not the standard, partially due to a grantmaking staffing shortage and a lack of public RFPs over the last few years.
The fact that EA and AIS have had very early funding from Dustin may have made the field relatively complacent, accepting of closing down projects if OP doesn’t fund them (which, to be fair, is sometimes a better signal than if e.g. NSF doesn’t). But the standard across non-profits is very aggressive multi-year courting campaigns for high-net-worth donors who can provide diversified funding, along with large public campaigns that engage very broadly on large missions. Most of the non-profits you know outside of AI safety have large teams focused solely on raising money, some members earning $1.2M per year because they bring in much more. This is not well known within AI safety because the field has been disconnected from other non-profits. This would preferably not be the case, but it’s reality.
With that said, I am eternally grateful for the funding ecosystem around AIS, since it is still much better in speed and feedback than what e.g. the Ford Foundation or Google.org provides (e.g., one-year response times, zero feedback, no response deadlines, etc.).
Appendix: Apart Research Impact Report V1.0
Since you’ve made it this far...
Our impact report makes Apart’s impact even clearer and it’s definitely worth a read!
If you’d like to hear about the personal impact we’ve had on the people who’ve been part of our journey, I highly recommend checking out the following:
/donate page testimonials (36)
Donation messages (11)
Manifund campaign comments (17)
Since V1.0, we’ve also fine-tuned and re-run parts of our impact evaluation pipeline, and here are a few more numbers:
Organizations citing our research (excluding universities): Centre for the Governance of AI, IBM Research, Salesforce, Institute for AI Policy and Strategy, Anthropic, Centre for the Study of Existential Risk, UK Health Security Agency, EleutherAI, Stability AI, Meta AI Research, Google Research, Alibaba, Tencent, Amazon, Allen Institute for AI, Institute for AI in Medicine, Chinese Academy of Sciences, Baidu Inc., Indian Institute of Technology, State Key Laboratory of General Artificial Intelligence, Thomson-Reuters Foundational Research, Cisco Research, Oncodesign Precision Medicine, Institute for Infocomm Research, Vector Institute, Canadian Institute for Advanced Research, Meta AI, Google DeepMind, Microsoft Research NYC, MIT CSAIL, ALTA Institute, SERI, AI Quality & Testing Hub, Hessian Center for Artificial Intelligence, National Research Center for Applied Cybersecurity ATHENE, Far AI, Max Planck Institute for Intelligent Systems, Institute for Artificial Intelligence and Fundamental Interactions, National Biomarker Centre, Idiap Research Institute, Microsoft Research India, Ant Group, Alibaba Group, OpenAI, Adobe Research, Microsoft Research Asia, Space Telescope Science Institute, Meta GenAI, Cynch.ai, AE Studio, Language Technologies Institute, Ubisoft, Flowers TEAM, Robot Cognition Laboratory, Lossfunk, Munich Center for Machine Learning, Center for Information and Language Processing, São Paulo Research Foundation, National Council for Scientific and Technological Development
Job placements: Cofounder of a stealth AI safety startup at a $20M valuation, METR, GDM, Anthropic, Martian (four placements as research leads of a new mech-interp team, plus staff), Cooperative AI Foundation, Gray Swan AI, HiddenLayer, Succesif, AIforAnimals, Sentient Foundation, Leap Labs, EleutherAI, Suav Tech, Aintelope, AIS Cape Town, Human Intelligence, among others
Program placements: MATS, ERA Cambridge, ARENA, LASR, AISC, Pivotal Research, Constellation, among others
In terms of our funding pool diversity, it spans from our Lambda Labs sponsorship of $5k of compute per team, to tens of sponsorships from partners for research events, many large-scale (restricted) research grants, paid research collaborations, and quite a few $90k-$400k general org support grants from every funder you know and love.