Answering on behalf of Apart Research!
We’re a non-profit research and community-building lab with a strategic focus on high-volume frontier technical research. Apart is currently raising a round to run the lab through 2025 and 2026, but here I’ll describe what your marginal donation may enable.
In just two years, Apart Research has established itself as a unique and efficient part of the AI safety ecosystem. Our research output includes 13 peer-reviewed papers published since 2023 at top venues including NeurIPS, ICLR, ACL, and EMNLP, with six main conference papers and nine workshop acceptances. Our work has been cited by OpenAI’s Superalignment team, and our team members have contributed to significant publications like Anthropic’s “Sleeper Agents” paper.
With this track record, we’re able to capitalize on our position as an AI safety lab and direct our efforts toward impactful frontiers in governance, research methodology, and AI control.
Beyond accelerating a Lab fellow’s research career at an average direct cost of around $3k, supporting research sprint participants for as little as $30 each, and growing local groups at similarly high impact-per-dollar ratios, your marginal donation can enable us to run further impactful projects:
Improved access to our programs ($7k-$25k): A professional revamp of our website and documentation would make our programs and research outputs more accessible to talented researchers worldwide. Building on the credibility of our paper acceptances, a redesign would also help us better appeal to institutional funders and technical professionals, scaling our impact through counterfactual funding and talent discovery. At the higher end of this range, we could additionally make our internal resources publicly available; these are specifically designed to accelerate technical AI safety careers.
Higher conference attendance support ($20k): Currently, we support only one fellow per team to attend conferences. Additional funding would enable a second team member to attend, at approximately $2k per person.
Improving worldview diversity in AI safety ($10k-$20k): We now work across all continents and see substantial value in our approach of enabling international and underrepresented professional talent (alongside our work at institutions such as 7 of the top 10 universities). With this funding, you would enable more targeted outreach from Apart and support existing lab members’ participation in conferences to discuss and represent AI safety among otherwise underrepresented professional groups.
Continuing impactful research projects ($15k-$30k): This would let us extend timely and critical research projects. For instance, we’re looking to port our cyber-evaluations work to Inspect, making it a permanent part of UK AISI’s catastrophic risk evaluations (a minimal sketch of what such a port involves follows this list). Our recent paper also introduces novel methods to test whether LLMs game public benchmarks, and we would like to expand this work by running the same tests on other high-impact benchmarks while making the results more accessible. These projects have direct impact on AI evaluation methodology, and we see other similar opportunities to extend projects at reasonable follow-up costs.
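For readers unfamiliar with Inspect (UK AISI’s open-source evaluation framework), here is a minimal sketch of the shape a ported evaluation task takes, assuming a recent version of the inspect_ai API. The task name, sample question, and scorer choice below are illustrative placeholders, not our actual cyber-evaluations:

```python
# Minimal sketch of an Inspect task (illustrative, not Apart's actual evals).
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def cyber_eval_demo():
    # Each Sample pairs a model prompt with the expected answer.
    dataset = [
        Sample(
            input="Which TCP port does SSH use by default?",
            target="22",
        ),
    ]
    return Task(
        dataset=dataset,
        solver=generate(),   # a single model completion
        scorer=includes(),   # checks the target string appears in the output
    )
```

A task like this can then be run against any supported model from the command line, e.g. `inspect eval cyber_eval_demo.py --model openai/gpt-4o`, which is what makes a ported evaluation easy to rerun as a standing part of a risk-evaluation suite.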
You’ll be supporting a growing organization: the Apart Lab fellowship has already doubled from Q1 2024 to Q3 2024 (17 to 35 fellows), and our research sprints have moved thousands of participants closer to AI safety.
Given current AGI development timelines, the need to scale and improve safety research is urgent. In our view, Apart seems like one of the better investments to reduce AI risk.
If this sounds interesting and you’d like to hear more (or have a specific marginal project you’d like to see happen), my inbox is open.