Esben Kran
Great reasoning! If you haven't already, I would include in the equation a consideration of how much you think you would 1) contribute to others' impact (inspiring donation %?) and 2) improve your own (new career, new projects, new donation opportunities discovered). These events are well-funded for generally pretty good reasons :)
Yeah, makes a lot of sense! I don't take "mid-tier" as offensive, since it's also just about Gwern spending all his time on writing vs. Kat Woods also running an organization. Huge respect to both, of course, for what they do.
Great post, hadn’t seen that one before.
I’ll also mention that I don’t think SoTA philosophy happens in any way within any of the areas that Luke mentions. If this is classified as academic philosophy, then that’s definitely fair. But if you look at where philosophy is developed the most (outside of imaginary parallel worlds) in my eyes, it’s the summaries of academic work on consciousness (The Conscious Mind), computer science (Gödel, Escher, Bach), AI (Superintelligence), genetic foundations for morals (Blueprint for Civilization), empirical studies of human behavior in moral scenarios (Thinking, Fast and Slow), politics (Expert Political Judgment), cognitive enhancement (Tools for Thought), and neuroscience (The Brain from Inside Out), all of which have academic centres of excellence that are very inspiring.
Like, the place philosophers who truly want to understand the philosophical underpinnings of reality go today looks very different from where it was during the Renaissance, in the sense that we now have instruments and mathematics that can measure ethics, morals, and the fundamental properties of reality.
But then I guess you end up with something like Kat Woods vs. Uri Hasson or something like that, and that’s not a comparison I’d necessarily make. And separately, what Yann lacks in holistic reasoning, he does make up for with the technical work he’s done (though he definitely peaked in ’96).
The same goes for philosophy. What are some examples where philosophical theorizing on the forums is significantly better than, e.g., the best book on that topic from the same year? I can totally buy this, but my philosophy studies during cognitive science were definitely extremely high quality and much better than most work on the forums.
Then of course add that EAs are also present in academia and maybe the picture gets more muddled.
Thanks for the overview! I agree with decorrelating this movement for a few reasons:
EA's critique culture has destroyed innovation in the field and is often the reason that a potentially impactful project doesn't exist or is super neutered. The focus here on empowering each other toward moral ambition is great.
The name Effective Altruism is very academic and unrelatable for most people discovering it for the first time. And the same is true for its community. It’s rare that the community you enter when you enter EA is action-oriented, innovative, and dynamic.
EA has indeed been hit by a few truckloads of controversy recently, so it's good to offer other options for people down the line.
On another note, just noticed the reference to Scandinavian EAs and wanted to give my quick take:
This varies locally; my impression is that it's more common in the Bay Area or Oxford. Scandinavian EAs, for example, are often content doing the 5th most impactful thing they could be doing, celebrating the gains they've made by not just doing some random thing. This is highly anecdotal.
I think the Copenhagen EAs have consistently been chasing the most impactful thing out there but it is true that the bets have been somewhat decorrelated from other EA projects. E.g., Danes now run Upstream Policy, ControlAI’s governance, Apart Research, Snake Anti-Venom, Seldon, Screwworm Free Future, among others, all of which have ToCs that are slightly different from core EA but that I personally think are more impactful than most other projects in their category per dollar.
I’m uncertain where the “5th most impactful” thing comes from here, and I may just be under-informed about our neighbors.
Great post and I agree! Curious about one point:
> 6. Academics often prefer writing papers to blog posts. Papers can seem more prestigious and don't get annoying negative comments. To the degree that prestige is directly valuable this is useful, but for most things I prefer blog posts / Facebook posts. I think there are a bunch of "mid-tier" LessWrong / EA Forum writers who I value dramatically more than many (far more prestigious) academics.

What are examples of comparisons between far more prestigious academics and mid-tier LW/EAF writers? Curious what the baselines here are, because it's definitely a bit harder for me to make this comparison.
You can get a subsidized free ticket if you apply for it :-)
To me, it’s an interesting decision to pull funding because of this type of coverage. There’s a tendency in AIS lobbying to never say what we actually mean to “get in the right rooms” but then when we want to say the thing that matters at a pivotal time, nobody will listen because we got there by being quiet.
Buckling under the pressure of the biggest lobby to ever exist (tech) putting out one or two hit pieces is really unfortunate. The same argument can be made for why UK AISI and the AI Safety Summits didn't become even bigger: there was simply no will to continue the lobbying push, and everyone was too afraid for their reputation.
Happy to hear alternative perspectives, of course.
I will mention that an explicit goal with the research hackathon community server we run is that there's little to no interaction between hackathons, since people should be out in the world doing direct work.
For us, this means that we invite them into our research lab or they continue their work elsewhere, instead of getting addicted to the server. So rather than optimizing for engagement, optimize for the ratio of information input to action output per visit.
OP has not pulled any funding. They've provided a few smaller grants over the last years that have been pivotal to Apart's journey, and I'm extremely grateful for this. OP has been a minority of Apart's funding, and the lack of support for our growth has been somewhat hard to decipher for us. Generally happy to chat more with OP staff about this, if anyone wishes to reach out, of course.
I was extremely grateful for your donation, and the impact Apart has had on individuals' personal stories is what makes all this work worth it! So we really, really appreciate this.
This is an in-depth answer to your questions (the reasons behind this campaign, why this timeline, what we do, how this situation relates to the general AIS funding ecosystem, what mistakes we made, and a short overview of the impact report and newer numbers).
Read the campaign page and the Apart Research Impact Report V1.0 before you continue.
This campaign
We're extremely grateful for the response we've received to this campaign, such as the many personal comments and donations further down on the /donate page and on Manifund; this is really what makes it exciting to be at Apart!
We have one of the more diverse funding pools of organizations in our position[1], but org-wide community-building funding mostly depends on OpenPhil, LTFF, and SFF. This situation comes after a pass from LTFF on a grant we were highly confident about because we had outperformed our last grant with them; unfortunately, we misjudged how underfunded LTFF itself was. Additionally, OpenPhil has been a smaller part of our funding than we would have hoped.
The last-minute part of this campaign is largely a consequence of delayed response timelines (something that is pretty normal in the field, see further down for elaboration) along with somewhat limited engagement from OpenPhil’s GCR team on our grants throughout our lifetime.
I'll also mention that non-profits generally spend immense amounts of time on fundraising campaigns, and what we feel is important to share transparently as part of this campaign are all the parts of our work that otherwise get overlooked in a "max 200 words" grant application focused on field-building.
We've been surprised by how important anecdotes actually are and have prioritized them too little in our applications; everyone has shared their personal stories now, and they are included across the campaign as a result. Despite this, Apart was still the second highest-rated grant for our PI at LTFF, and they simply had to reject it due to its size since they were themselves underfunded.
With OpenPhil, I think we've been somewhat unlucky with the depth of grant reviews and feedback from their side, and have missed the opportunity to respond to their uncertainties. Despite receiving some top-tier grants from SFF and LTFF in 2024, an organization like ours is dependent on larger OP grants unless we have successful long-term public campaigning similar to the Wikimedia Foundation, or direct interfacing with high net worth individuals, something every specialized non-profit outside AI safety needs as it scales.
Hope that clarifies things a bit! We've consistently pivoted towards more and more impactful areas, and I think Apart is now reaping the benefits of growing into an independent research lab. Our latest work is very exciting, the research is getting featured across media, our backend software is now in use, and governments are calling us for help, so it's unfortunate to find the organization in this situation.
For others raising funds, here is what Apart could have done to improve the situation:
Stay more informed about funding shortfalls for specific foundations we rely on. This was especially important for the current situation.
Rely less on expectations that larger funding pools would follow AI capabilities advancement and the urgency of AI safety.
Related to the above point, avoid scaling based on the YoY trend line of funding Apart received from 2022 to 2023 to 2024 (conditional projected growth capacity), since this wasn't matched in 2025, and staying at the mid-2024 scale for longer may have been better (though the speed of AI development points to somewhat different conclusions, and we didn't grow more than 50% over last year's budget).
Be more present in SF and London to interface face-to-face with funders and answer their uncertainties (these usually come back to us after the grant process, and we often have relatively good answers but no chance to provide them before a decision is made).
Communicate more of our impact and our work throughout the EA and AIS community, beyond large academic participation and our communication to the direct beneficiaries of our work: participants, newsletter subscribers, partners, etc. This is already under way, but I would guess there's a six-month lead time or so on it.
Engage other visionary donors outside AIS to join in on the funding rounds, potentially under other attractive narratives (something that usually takes two years to activate and that I’m certain will be possible by 2026).
Rely less on previous results as evidence for forecasts of grant-making decisions.
With that said, I think we’ve acted as well as we could, and this campaign is part of our contingency plans, so here we are! We could’ve launched it earlier but that is a minor point. I’m confident the team will pull through, but I’ll be the first to say that the situation could be better.
The team and I believe a lot in the work that happens at Apart, and I’m happy that it seems our researchers and participants agree with us—we could of course solve it all by pivoting to something less impactful, but that would be silly.
So overall, this is a relatively normal situation for non-profits outside AI safety and we’re just in a place where the potential funders for AI safety community-building are few and far between. This is not a good situation for Apart, but it is what it is!
Some notes on what Apart does
Since this is a longer answer, it may also be worth it to clarify a few misunderstandings that sometimes come up around our work due to what seems like an early grounding of the ‘Apart narrative’ in the community that we haven’t worked enough to update:
"Apart is an undergraduate talent pipeline": Apart's impact happens mostly for mid-career-adjacent technical talent, while we of course give anyone the chance to be a part of AI safety simply based on the quality of their work (re: Mathias' comment). E.g., the majority of participation in our research challenges / sprints comes from graduate students and above (I can double-check the numbers, but it's about 30%-40% mid-career technical talent).
"Apart just does hackathons": Our hackathons seem quite effective at engaging people directly with research topics without CV discrimination, and we believe a lot in their impact. However, most of our labor hours are spent on our lab and research accelerator, which helps people all over the world become actual researchers on their own time and engage with academia, something that makes us less visible on e.g. LessWrong than we perhaps strategically should have been for the key funders to take notice. E.g., we had 9-15 researchers present at each of the latest three major AI research conferences (ICML, NeurIPS, ICLR), with multiple awards and features. See more in the Impact Report.
“Apart is focusing on mechanistic interpretability”: Most of our earliest papers were related to mechanistic interpretability due to the focus of our former research director, but our agenda is really to explore the frontier of under-explored agendas, which also may lead to less attention for our work within AI safety. Our earliest work in 2022 was on the LLM psychology agenda and model-human alignment with much of our latest work being in metrology (science of evals), “adjacent field X AGI”, and dark patterns; agendas that we generally believe are very under-explored. If we optimized for funding-friendliness, one could argue that we should focus even more on the agendas that receive attention and have open RFPs, but that is counter to our theory of change.
Funding ecosystem
The situation for Apart speaks to broader points about the AI safety funding ecosystem that I’ll leave here for others who may be curious about how an established organization like Apart may run a public campaign with such a short runway:
Other organizations within AI safety have a similarly high dependence on a few funders and also face significant funding cliffs. Due to Apart's focus on transparency and community engagement, Jason, our team, and I decided to approach ours with this campaign, since we believe a lot in open engagement with the community. I won't name names, but this is one of those facts that has been well known within this small circle all throughout 2023/24/25.
The current administration's retraction of national science funding sources means that there's now an even larger pool of grantees requesting funding, and even academic labs have to close, with many US academics moving to other countries. A key example is the $3B in research funding terminated for Harvard University.
Apart receives enough restricted funding for specific research projects; this is generally quite generous and available since there are many foundations providing it (from OpenAI Inc. 501(c)(3) to Foresight Institute to Schmidt Sciences to aligned private companies). This is not a big problem, and you'll see orgs focused on a single technical agenda have an "easier" time raising.
Our goal is the global talent engagement that we have seen a lot of success in (see the testimonials down on the https://apartresearch.com/donate page for a few examples), which is of course "field-building." For this, there's OpenPhil, LTFF, and SFF, besides a few new opportunities that are popping up that are more available to people with close networks to specific high net worth foundations (some of which we miss out on by being globally represented but not as well represented within the Bay Area).
The fact that funders aren't extremely excited about specific work should generally be weighted much lower than it is in EA and AIS. E.g., see the number of early investors passing on AirBnB and Amazon (40/60) and the repeated grant rejections and lack of funding for Dr. Katalin Karikó's research. Rejections are a signal, but not necessarily about the underlying ideas. Every time we've had the chance to have longer conversations with grantmakers, they've been quite excited. However, this is not the standard, partially due to grantmaker staffing shortages and the lack of public RFPs over the last few years.
The fact that EA and AIS have had very early funding from Dustin may have made the field relatively complacent and accepting of closing down projects if OP doesn't fund them (which, to be fair, is sometimes a better signal than if e.g. NSF doesn't). But the standard across non-profits is very aggressive multi-year courting campaigns for high net worth donors who can provide diversified funding, along with large public campaigns that engage very broadly on large missions. Most of the non-profits you know outside of AI safety have large teams focused solely on raising money, some of whom earn $1.2M per year because they bring in much more. This is not well known within AI safety because the field has been disconnected from other non-profits. This would preferably not be the case, but it's reality.
With that said, I am eternally grateful for the funding ecosystem around AIS, since it is still much better in speed and feedback than what e.g. the Ford Foundation or Google.org provides (think one-year response times, zero feedback, no response deadlines, etc.).
Appendix: Apart Research Impact Report V1.0
Since you’ve made it this far...
Our impact report makes Apart’s impact even clearer and it’s definitely worth a read!
https://apartresearch.com/impact/report

If you'd like to hear about the personal impact we've had on the people who've been part of our journey, I highly recommend checking out the following:
Since V1.0, we've also further fine-tuned and re-run parts of our impact evaluation pipeline, and here are a few more numbers:
Organizations citing our research (excluding universities): Centre for the Governance of AI, IBM Research, Salesforce, Institute for AI Policy and Strategy, Anthropic, Centre for the Study of Existential Risk, UK Health Security Agency, EleutherAI, Stability AI, Meta AI Research, Google Research, Alibaba, Tencent, Amazon, Allen Institute for AI, Institute for AI in Medicine, Chinese Academy of Sciences, Baidu Inc., Indian Institute of Technology, State Key Laboratory of General Artificial Intelligence, Thomson-Reuters Foundational Research, Cisco Research, Oncodesign Precision Medicine, Institute for Infocomm Research, Vector Institute, Canadian Institute for Advanced Research, Meta AI, Google DeepMind, Microsoft Research NYC, MIT CSAIL, ALTA Institute, SERI, AI Quality & Testing Hub, Hessian Center for Artificial Intelligence, National Research Center for Applied Cybersecurity ATHENE, Far AI, Max Planck Institute for Intelligent Systems, Institute for Artificial Intelligence and Fundamental Interactions, National Biomarker Centre, Idiap Research Institute, Microsoft Research India, Ant Group, Alibaba Group, OpenAI, Adobe Research, Microsoft Research Asia, Space Telescope Science Institute, Meta GenAI, Cynch.ai, AE Studio, Language Technologies Institute, Ubisoft, Flowers TEAM, Robot Cognition Laboratory, Lossfunk, Munich Center for Machine Learning, Center for Information and Language Processing, São Paulo Research Foundation, National Council for Scientific and Technological Development
Job placements: Cofounder of a stealth AI safety startup @ $20M valuation, METR, GDM, Anthropic, Martian (four placements as research leads of the new mech-int team plus staff), Cooperative AI Foundation, Gray Swan AI, HiddenLayer, Succesif, AIforAnimals, Sentient Foundation, Leap Labs, EleutherAI, Suav Tech, Aintelope, AIS Cape Town, Human Intelligence, among others
Program placements: MATS, ERA Cambridge, ARENA, LASR, AISC, Pivotal Research, Constellation, among others
[1] In terms of our funding pool diversity, it spans from our Lambda Labs sponsorship of $5k compute per team to tens of sponsorships from partners for research events, many large-scale (restricted) research grants, paid research collaborations, and quite a few $90k-$400k general org support grants from every funder you know and love.
The main effect of regulation is to control certain net-negative outcomes and hence slow down negative AGIs. RSPs that require stopping development at ASL-4 or similar thresholds also fall under the pausing agenda. It might be a question of semantics, given how Pause AI and the Pause AI Letter have become the memetic sink for the term "pause AI"?
Great post, thank you for laying out the realities of the situation.
In my view, there are currently three main strategies pursued to solve X-risk:
Slow / pause AI: Regulation, international coordination, and grassroots movements. Examples include UK AISI, EU AI Act, SB1047, METR, demonstrations, and PauseAI.
Superintelligence security: From infrastructure hardening, RSPs, security at labs, and new internet protocols to defense of financial markets, defense against slaughterbots, and civilizational hedging strategies. Examples include UK ARIA, AI control, and some labs.
Hope in AGI: Developing the aligned AGI and hoping it will solve all our problems. Examples include Anthropic and arguably most other AGI labs.
Of these, (3) seems weirdly overrated in AI safety circles. (1) seems incredibly important now and radically under-emphasized. And in my eyes, (2) is the direction most new technical work should go. I will refer to Anthropic's safety researchers on the question of whether the labs have a plan outside of (3).
Echoing @Buck’s point that you now have less need to be inside a lab for model access reasons. And if it’s to guide the organization, that has historically been somewhat futile in the face of capitalist incentives.
Answering on behalf of Apart Research!
We’re a non-profit research and community-building lab with a strategic target on high-volume frontier technical research. Apart is currently raising a round to run the lab throughout 2025 and 2026 but here I’ll describe what your marginal donation may enable.
In just two years, Apart Research has established itself as a unique and efficient part of the AI safety ecosystem. Our research output includes 13 peer-reviewed papers published since 2023 at top venues including NeurIPS, ICLR, ACL, and EMNLP, with six main conference papers and nine workshop acceptances. Our work has been cited by OpenAI’s Superalignment team, and our team members have contributed to significant publications like Anthropic’s “Sleeper Agents” paper.
With this track record, we’re able to capitalize on our position as an AI safety lab and mobilize our work to impactful frontiers of technical work in governance, research methodology, and AI control.
Besides our ability to accelerate a Lab fellow's research career at an average direct cost of around $3k, enable research sprint participants for as little as $30, and support growth at local groups at similarly favorable cost-to-impact ratios, your marginal donation can enable us to run further impactful projects:
Improved access to our programs ($7k-$25k): A professional revamp of our website and documentation would make our programs and research outputs more accessible to talented researchers worldwide. Besides our establishment as a lab through our paper acceptances, a redesign will help us cater even more to institutional funders and technical professionals, which will help scale our impact through valuable counterfactual funding and talent discovery. At the higher end, we will also be able to make our internal resources publicly available; these resources are specifically designed to accelerate AI safety technical careers.
Higher conference attendance support ($20k): Currently, we only support one fellow per team to attend conferences. Additional funding would enable a second team member to attend, at approximately $2k per person.
Improving worldview diversity in AI safety ($10k-$20k): We've now worked on all continents and find a lot of value in our approach to enabling international and underrepresented professional talent (besides our work with institutions such as 7 of the top 10 universities). With this funding, you would enable more targeted outreach from Apart's side and existing lab members' participation in conferences to discuss and represent AI safety to otherwise underrepresented professional groups.
Continuing impactful research projects ($15k-$30k): We will be able to extend timely and critical research projects. For instance, we’re looking to port our cyber-evaluations work to Inspect, making it a permanent part of UK AISI catastrophic risk evaluations. Our recent paper also finds novel methods to test whether LLMs game public benchmarks and we would like to expand the work to run the same test on other high-impact benchmarks while making the results more accessible. These projects have direct impacts on AI evaluation methodology but we see other opportunities like this for expanding projects at reasonable follow-up costs.
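To make the benchmark-gaming point a bit more concrete, here is a minimal sketch of one generic check in that family, a completion-based contamination probe. This is an illustration under stated assumptions, not our published method: `query_model` is a hypothetical placeholder for whatever model API you use, and the heuristic simply asks whether a model can reproduce the second half of a benchmark item verbatim from its first half, which would suggest the item leaked into training data and inflates scores.

```python
# Illustrative contamination-style probe (a sketch, not Apart's published method).
from difflib import SequenceMatcher


def completion_overlap(question: str, model_completion: str) -> float:
    """Similarity between the held-out second half of a benchmark item
    and what the model produced when shown only the first half."""
    half = len(question) // 2
    held_out = question[half:]
    return SequenceMatcher(None, held_out, model_completion).ratio()


def probe_item(question: str, query_model) -> float:
    """query_model is any callable (prompt -> str); hypothetical placeholder here."""
    half = len(question) // 2
    prompt = f"Continue this text exactly:\n{question[:half]}"
    return completion_overlap(question, query_model(prompt))

# A high average overlap across a benchmark's items is weak evidence of
# memorization/gaming; real analyses would add baselines and paraphrase controls.
```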
You'll be supporting a growing organization, with the Apart Lab fellowship having already doubled from Q1'24 to Q3'24 (17 to 35 fellows) and our research sprints having moved thousands of people closer to AI safety.
Given current AGI development timelines, the need to scale and improve safety research is urgent. In our view, Apart seems like one of the better investments to reduce AI risk.
If this sounds interesting and you’d like to hear more (or have a specific marginal project you’d like to see happen), my inbox is open.
Results from the AI x Democracy Research Sprint
Very interesting! We had a submission for the evals research sprint in August last year on the same topic. Check it out here: Turing Mirror: Evaluating the ability of LLMs to recognize LLM-generated text (apartresearch.com)
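For anyone curious what the simplest form of such an evaluation can look like, here is a rough sketch in the spirit of that submission (not the Turing Mirror protocol itself; `ask_judge` is a hypothetical callable you would wire to an actual model API, and the prompt wording is just an assumption):

```python
# Minimal sketch: measure how well a judge LLM distinguishes AI- from human-written text.
from typing import Callable, List, Tuple


def detection_accuracy(samples: List[Tuple[str, str]],
                       ask_judge: Callable[[str], str]) -> float:
    """samples: (text, true_label) pairs with labels "AI" or "Human"."""
    correct = 0
    for text, true_label in samples:
        prompt = ("Was the following text written by an AI or a human? "
                  "Answer with exactly one word, AI or Human.\n\n" + text)
        verdict = ask_judge(prompt).strip().lower()
        correct += int(verdict.startswith(true_label.lower()))
    return correct / len(samples)

# Compare the result against the 50% chance baseline on a balanced sample set.
```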
Demonstrate and evaluate risks from AI to society at the AI x Democracy research hackathon
Join the AI Evaluation Tasks Bounty Hackathon
You are completely right. My main point is that the field of AI safety is under-utilizing commercial markets while commercial AI indeed prioritizes reliability and security to a healthy level.
Yep, probably agree with this. Then it's definitely good to lead a promising researcher away from the bad niches and into the better ones!