Long-Term Future Fund: December 2021 grant recommendations
Introduction
The Long-Term Future Fund made the following grants as part of its 2021 Q4 grant cycle (grants paid out sometime between August and December 2021):
Total funding distributed: $2,081,577
Number of grantees: 34
Acceptance rate (excluding desk rejections): 54%
Payout date: July–December 2021
Report authors: Asya Bergal (Chair), Oliver Habryka, Adam Gleave, Evan Hubinger
2 of our grantees requested that we not include public reports for their grants. (You can read our policy on public reporting here). We also referred 2 grants, totalling $110,000, to private funders, and approved 3 grants, totalling $102,000, that were later withdrawn by grantees.
If you’re interested in getting funding from the Long-Term Future Fund, apply here.
(Note: The initial sections of this post were written by me, Asya Bergal.)
Other updates
Our grant volume and overall giving increased significantly in 2021 (and in 2022 – to be featured in a later payout report). In the second half of 2021, we applied for funding from larger institutional funders to make sure we could make all the grants that we thought were above the bar for longtermist spending. We received two large grants at the end of 2021:
$1,417,000 from the Survival and Flourishing Fund’s 2021-H2 S-process round
$2,583,000 from Open Philanthropy
Going forward, my guess is that donations from smaller funders will be insufficient to support our grantmaking, and we’ll mainly be relying on larger funders.
A higher volume of grants and limited fund manager time mean that the write-ups in this report are shorter than they have traditionally been. I think communicating publicly about our decision-making process continues to be valuable for the overall ecosystem, so in future reports, we’re likely to continue writing short one-sentence summaries for most of our grants, with longer write-ups for larger grants or grants that we think are particularly interesting.
Highlights
Here are some of the public grants from this round that I thought looked most exciting ex ante:
$50,000 to support John Wentworth’s AI alignment research. We’ve written about John Wentworth’s work in the past here. (Note: We recommended this grant to a private funder, rather than funding it through LTFF donations.)
$18,000 to support Nicholas Whitaker doing blogging and movement building at the intersection of EA / longtermism and Progress Studies. The Progress Studies community is adjacent to the longtermism community, and is one of a small number of communities thinking carefully about the long-term future. I think having more connections between the two is likely to be good both from an epistemic and a talent pipeline perspective. Nick had strong references and seemed well-positioned to do this work as co-founder and editor of Works in Progress magazine.
$60,000 to support Peter Hartree pursuing independent study, plus a few “special projects”. Peter has done good work for 80K for several years, received very strong references, and has an impressive history of independent projects, including Inbox When Ready.
Grant Recipients
In addition to the grants described below, 2 grants have been excluded from this report at the request of the applicants.
Note: Some of the grants below include detailed descriptions of our grantees. Public reports are optional for our grantees, and we run all of our payout reports by grantees before publishing them. We think carefully about what information to include to maximize transparency while respecting grantees’ preferences.
We encourage anyone who thinks they could use funding to positively influence the long-term trajectory of humanity to apply for funding.
Grants evaluated by Evan Hubinger
EA Switzerland/PIBBSS Fellowship ($305,000): A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research
This is funding for the PIBBSS Fellowship, a new AI safety fellowship program aimed at promoting alignment-relevant interdisciplinary work. The central goal of PIBBSS is to connect candidates with strong interdisciplinary (e.g. not traditional AI) backgrounds to mentors in AI safety to work on interdisciplinary projects selected by those mentors (e.g. exploring connections between evolution and AI safety).
We decided to fund this program primarily based on the strong selection of mentors excited about participating. We did have some reservations: because the program targets candidates with strong interdisciplinary backgrounds but not necessarily much background in EA or AI safety, we were somewhat concerned that such candidates might not stick around and continue doing good AI safety work after the program. However, we decided it was worth pursuing this avenue regardless, given that such interdisciplinary talent is very much needed, and at least to get information on how effectively we can retain such talent.
Berkeley Existential Risk Initiative ($250,000): 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms
This is funding for Prof. Philip Thomas to hire a research engineer to create a library for easily using Seldonian machine learning algorithms. I think that the Seldonian framework, compared to many other ways of thinking about machine learning algorithms, centers real safety concerns in a useful way. Though I am less excited about the particular Seldonian algorithms that currently exist, I am excited about Prof. Thomas continuing to push the general Seldonian framework and this seems like a reasonably good way to do so.
The biggest caveat with this grant was that Prof. Thomas had very little experience hiring and managing research engineers, suggesting that it might be quite difficult for him to turn this grant into productive engineering work. However, both BERI and I have provided Prof. Thomas with some assistance in this domain, and I am hopeful that this grant will end up producing good work.
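To make the Seldonian framing concrete: the core move is to pair ordinary candidate selection with a held-out, high-confidence safety test, and to return “No Solution Found” rather than return a model that might violate the safety constraint. The sketch below is my own minimal illustration of that structure, not the library this grant funds; the data split, the t-based confidence bound, and the `train_candidate` / `estimate_constraint` callables are assumptions made for exposition.

```python
import numpy as np
from scipy import stats

def seldonian_train(data, train_candidate, estimate_constraint, delta=0.05):
    """Illustrative skeleton of a Seldonian algorithm:
    1. choose a candidate model on one data split;
    2. run a high-confidence safety test on a held-out split;
    3. return the model only if the test passes, otherwise report failure
       rather than risk violating the constraint.
    `train_candidate` and `estimate_constraint` are hypothetical callables.
    """
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(data))
    split = len(data) // 2
    candidate_data = [data[i] for i in idx[:split]]
    safety_data = [data[i] for i in idx[split:]]

    # Candidate selection: ordinary training on the first split.
    model = train_candidate(candidate_data)

    # Safety test: a (1 - delta) upper confidence bound on the constraint
    # g(model), which must be <= 0 for the behaviour to count as safe.
    g_hat = np.array([estimate_constraint(model, point) for point in safety_data])
    n = len(g_hat)
    upper = g_hat.mean() + g_hat.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, n - 1)

    return model if upper <= 0 else None  # None == "No Solution Found"
```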
John Wentworth ($50,000): 6-month salary for general research
I have been consistently impressed with John Wentworth’s AI safety work, as I was when we funded him in the past. Though this grant is more open-ended than previous grants we’ve made to John, I think John is an experienced enough AI safety researcher that I am excited about him doing general, open-ended research.
Note: We recommended this grant to a private funder, rather than funding it through LTFF donations. At the time, we believed that the general nature of the grant might include work outside of the scope of what we are able to fund as a charitable organization, but we intend to make similar grants through EA Funds going forward.
Anonymous ($44,552): Supplement to 3-month Open Phil grant, working on skilling up in AI alignment infrastructure.
This grant is to support a couple of promising candidates working on AI safety infrastructure/operations/community projects, supplementing funding that one of them previously received from Open Phil. This grant was referred to us by the EA Infrastructure Fund and funded by us largely on their recommendation.
Anonymous ($30,000): Additional funding to free up time for technical AI safety research.
This funding is general support to help a technical AI safety researcher, whose work I’ve been excited about, improve their productivity. I think that many people doing good work in this space are currently underinvesting in improving their own productivity. If we can alleviate that in this case by providing extra funding, I think that’s a pretty good thing for us to be doing.
David Reber ($20,000): 9.5 months of strategic outsourcing to read up on AI Safety and find mentors
This funding is to help David improve his productivity and free up time to read up on AI safety while pursuing his AI PhD at Columbia. I think these are valuable things for David to be doing, and that they will increase his odds of being able to contribute meaningfully to AI safety in the future. That said, we decided to fund only David’s productivity improvements and not a teaching buyout during his PhD: we judged that teaching was likely to be somewhat valuable to David at this point in his career, and we were uncertain enough about his own research that a full teaching buyout didn’t seem to make sense.
Adam Shimi ($17,355): Slack money for increased productivity in AI Alignment research
Adam Shimi has been doing independent AI safety research under a previous grant from us, but has found that he is tight on funding and could improve his productivity by receiving an additional top-up grant. Given that we continue to be excited by Adam’s research, and he thinks that the extra funding would be helpful for his productivity, I think this is a very robustly good grant.
Grants evaluated by Asya Bergal
Any views expressed below are my personal views and not the views of my employer, Open Philanthropy. (In particular, getting funding from the Long-Term Future Fund should not be read as an indication that the applicant has a greater chance of receiving funding from Open Philanthropy, and not receiving funding from the Long-Term Future Fund [or any risks and reservations noted in the public payout report] should not be read as an indication that the applicant has a smaller chance of receiving funding from Open Philanthropy.)
Kristaps Zilgalvis ($250,000): Funding for a degree in the Biological Sciences at UCSD (University of California San Diego).
Kristaps, who was based in Belgium, applied for funding to cover 4 years of tuition, housing, and dining fees at UCSD in the U.S., with the ultimate goal of reducing biological existential risk. Kristaps had previously worked with a long-term-future-focused biosecurity researcher, who gave him a positive reference, and he demonstrated a reasonable understanding of long-term biorisk considerations in my conversation with him.
It’s generally very difficult for international students to find support for degrees in the U.S., and Kristaps had indicated in his application that his counterfactual would be to either work for a few years to make money to pay for his degree, or to take out a substantial student loan.
In general, my guess is that going to a good US or UK university increases future impact in expectation, both by boosting someone’s career, and by putting them in closer proximity for longer with other people working on the long-term future. I think this case is stronger the better the university, the closer the university is to a key geographic hub, and the more students at the university itself are thinking seriously about the long-term future. I made the call to fund here partially by referencing this ranking site, which ranked UCSD 10th worldwide for Biological Sciences degrees.
I’m somewhat worried that funding undergraduate degrees is unusually likely to attract applicants who feign interest in a priority cause.
Noemi Dreksler ($99,550.89): Two-year funding to run public and expert surveys on AI governance and forecasting.
Noemi applied for funding to design, conduct, analyze, and write up survey research for the Centre for the Governance of AI, which hadn’t yet been set up at the time of the application. From her application:
> “Ongoing projects include a large-scale cross-cultural survey of the public’s AI views (follow-up to Zhang & Dafoe, 2019), the analysis and dissemination of an AI researcher survey (Zwetsloot et al., 2021; Zhang et al., 2021), and a survey of economists’ views of AI/HLMI and related economic forecasts. Future work might include e.g., eliciting expert views and forecasts on AI from a variety of epistemic communities (e.g., policy-makers, AI researchers, AI ethics experts) through surveys and a study of the role of anthropomorphism and mind perception in attitudes towards AI governance.”
I wanted to make this grant because I was interested in some of the concrete surveys being conducted (particularly the economist survey), and also overall liked the model of having someone “specialize” in conducting AI-relevant surveys – ideally, a dedicated survey-runner could become unusually efficient at running surveys, and make it cheap for others to request survey data when it was decision-relevant for them.
Anonymous ($90,000): 6-month salary to do AI alignment research.
This was funding for someone with a strong track record in AI alignment to work independently for 6 months.
William D’Alessandro ($22,570): Funds to cover speaker fees and event costs for EA community building tied in with his MA course on longtermism in 2022.
William Bradshaw ($16,456): Funding to cover a visit to Boston (via a stopover in another country as required by US coronavirus restrictions at the time) for biosecurity work on the Nucleic Acid Observatory and other biosecurity projects in the Esvelt group.
George Green ($11,400): Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software.
James Smith ($8,324): Time costs over 6 months to publish a paper on the interaction of open science practices and biorisk.
Anonymous ($5,585): 3-month salary to set up a new x-risk relevant project over the upcoming year.
Chelsea Liang ($5,000): 3+ months’ compensation to drive a time-sensitive policy paper: ‘Managing the Transition to Widespread Metagenomic Monitoring: Policy Considerations for Future Biosurveillance’.
Grants evaluated by Adam Gleave
Chad DeChant ($90,000): Funding to finish an AI-safety-related CS PhD on enabling AI agents to accurately report their actions
Chad DeChant is a final-year CS PhD candidate at Columbia, working on enabling AI agents to report on and summarize their actions in natural language. He recently switched advisors to Daniel Bauer to pursue this topic, but unfortunately Daniel was unable to support him on a grant. This funding allows Chad to complete his CS PhD.
Chad has previously taught a course on AI Safety, Ethics and Policy. He is interested in pursuing a career combining technical AI safety with policy and governance, which I think he is a good fit for. Completing a PhD is a prerequisite for many of the relevant positions, so it seems worth enabling Chad to complete the program. Additionally, I think it is plausible that his current research direction will help with long-term AI safety.
The Center for Election Science ($50,000): General support for campaigns to adopt approval voting at local levels in the US (Note: we had originally included an outdated version of this write-up in this post; we’ve now updated this.)
Plurality voting, where electors vote for a single candidate from a list and the one with the most votes wins, is by far the most common voting system worldwide. Yet it is widely agreed by social choice theorists to be one of the worst voting systems, leading to random outcomes and often favoring extreme candidates. The Center for Election Science (CES) campaigns to adopt approval voting in the US, where voters can pick every candidate they “approve” of and the one with most approval wins.
I’m not sure whether approval voting is better than alternatives like ranked choice voting: my sense is approval voting has nicer theoretical properties and is backed by lab experiments. However, ranked choice voting has been battle-tested in more political situations. Both of them are, however, much better than plurality voting.
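To illustrate how approval and plurality voting can disagree, here is a toy example (hypothetical candidates and vote counts, purely for exposition) in which a broadly acceptable candidate loses under plurality but wins under approval voting:

```python
from collections import Counter

# Each ballot lists the candidates a voter approves of, first choice first.
# Hypothetical electorate: two polarizing candidates split most first choices,
# while a centrist is approved by almost everyone.
ballots = (
    [["Left", "Centre"]] * 35 +
    [["Right", "Centre"]] * 33 +
    [["Centre"]] * 32
)

# Plurality: only each voter's first choice counts.
plurality = Counter(ballot[0] for ballot in ballots)

# Approval: every approved candidate gets a point.
approval = Counter(c for ballot in ballots for c in ballot)

print("Plurality winner:", plurality.most_common(1)[0])  # ('Left', 35)
print("Approval winner:", approval.most_common(1)[0])    # ('Centre', 100)
```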
If implemented, approval voting could result in politicians being elected who more reliably reflect popular opinion and, in particular, favor candidates that appeal to a broad base. This seems likely to improve political stability and institutional decision-making, which seems robustly positive for the long term. However, it’s not without its pitfalls. For example, perhaps an extreme candidate winning occasionally helps “reset” government and keep it dynamic. Approval voting is likely to avoid those extreme candidates who don’t have sufficient support.
CES has won ballot initiatives in Fargo, ND (population 125k) and St. Louis, MO (population 300k), at an average cost of $10 per voter. They have also organised a nationwide chapter system, outreach campaigns and a small research department. I’m confident they can replicate this success in other cities in the US, and think it’s plausible they can scale to get approval voting used in some state gubernatorial races.
However, from a longtermist perspective, most local governments are of limited importance – what matters is mostly the decisions of the US and other influential nation states, and some key international bodies.
CES may be able to have influence at the federal level by changing state-level voting rules on how senators and representatives are elected. This is not something they have accomplished yet, but would be a fairly natural extension of the work they have done so far. Additionally, they may be able to influence presidential primaries. Parties have significant leeway here, with substantial variation between states.
Influencing presidential elections would be significantly harder. Plurality and approval voting give effectively the same outcome in two-candidate races, which US presidential elections currently are de facto. If all states adopted approval voting, then presidential races could include a broader range of candidates. The best option is likely an interstate compact to adopt a national popular approval vote, which would require only a majority of states to adopt it.
I find the most plausible path to (long-term) impact to be that CES continues to switch local jurisdictions to approval voting, and that this provides enough real-world demonstration of approval voting’s value that new international institutions or nation states adopt it. Improving the composition of the Senate and House is also likely to provide some benefit, but I judge it to be smaller.
Prof Nick Wilson ($27,000): Funding for a research fellow to identify island societies likely to survive sun-blocking catastrophes and to optimise their chances of survival
Nick Wilson is a Professor in Public Health at the University of Otago. We funded him to hire a research assistant for a paper investigating possible island refuges for sun-blocking agricultural catastrophes. Such catastrophes are both plausible (e.g. from nuclear war or volcanic eruption) and reasonably neglected. The study has now been completed and the findings are covered in a long post on the EA Forum. More detailed articles have been submitted to journals, but the preprints are now available (for the main study, and another study of food self-sufficiency in New Zealand). The key findings were that some locations could likely produce enough food in a nuclear winter to keep feeding their populations, but that food supply alone does not guarantee the flourishing of a technological society if trade is seriously disrupted.
Benedikt Hoeltgen ($19,020): 10-month salary for research on AI safety/alignment, focusing on scaling laws or interpretability.
We are funding Benedikt to work on technical AI safety research with Sören Mindermann and Jan Brauner in Yarin Gal’s group at Oxford. Benedikt published several papers in philosophy during his undergraduate degree, and switched to ML research during his Master’s in Computer Science after speaking to 80,000 Hours. I think Benedikt has a promising career ahead of him, and that this research experience will help him get into top PhD programs or other research-focused positions.
Anonymous (pseudonym Gurkenglas) ($14,125): 3-month salary to produce an interpretability tool that illustrates the function of a network’s modules.
Understanding how neural networks work will help with AI safety by letting us audit networks prior to deployment, better understand the kinds of representations they tend to learn, and potentially use them as part of a human-in-the-loop training process. The applicant has proposed a novel approach to interpretability based around computing the invariances of a neuron – what other inputs produce the same activations – and detecting modules in a neural network. While I consider this direction to be somewhat speculative, it seems interesting enough to be worth funding and to renew if the results show promise.
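To give a rough sense of what this might look like in practice (my own illustrative sketch, not the applicant’s proposed tool), one crude way to probe a neuron’s invariances is to sample perturbations of an input and keep the ones that leave that unit’s activation essentially unchanged; the toy network, layer index, and tolerance below are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical small network; a real tool would target trained models of interest.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

def neuron_activation(x, layer_index=2, unit=5):
    """Activation of one hidden unit for input(s) x."""
    h = x
    for i, layer in enumerate(model):
        h = layer(h)
        if i == layer_index:
            return h[..., unit]
    raise IndexError(layer_index)

def invariant_inputs(x0, n_samples=2000, tol=0.05):
    """Randomly perturb x0 and keep the perturbations that leave the unit's
    activation (approximately) unchanged -- a crude picture of the neuron's
    invariance set around x0."""
    with torch.no_grad():
        base = neuron_activation(x0)
        candidates = x0 + 0.5 * torch.randn(n_samples, x0.shape[-1])
        activations = neuron_activation(candidates)
        mask = (activations - base).abs() < tol
    return candidates[mask]

x0 = torch.randn(10)
print(invariant_inputs(x0).shape)  # inputs this unit treats as (nearly) the same
```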
Anonymous ($8,000): 5-month salary top-up to plug hole in finances while finishing PhD in AI governance.
The grantee is pursuing a PhD on a topic related to AI governance. They have a temporary hole in finances due to a low PhD stipend coupled with high living expenses in their current location. I have heard positive things about their work from experts in the field, so I think it is worth providing them with this relatively small supplement to ensure financial limitations do not hamper their productivity.
Anson Ho ($4,800): 3-month funding for a project analysing AI takeoff speed
Anson is a recent Physics graduate from St Andrews. We are funding them to work with Vael Gates (Stanford post-doc) to study AI takeoff speed and continuity. While this topic has been studied shallowly in the past, I think there is still plenty of room for future work. Anson is new to AI strategy research but has a strong STEM background (first-class degree from a top university) and has done some self-studying on AI (e.g. attended EA Cambridge’s AGI Safety course), and so seems to have a good chance of making progress on this important problem.
Grants evaluated by Oliver Habryka
David Manheim ($70,000): 6-month salary to continue work on biorisk and policy, and to set up a longtermist organization in Israel.
We’ve given multiple grants to David in the past (example). In this case, David was planning to work with FHI, but FHI was unable to pay him for his time. To enable him to continue doing work on longtermist policy, we offered to cover his salary at ALTER, the new organization he has set up. I did not evaluate this grant in great depth, given that FHI would have been happy to pay for his time otherwise.
Peter Hartree ($60,000): 6-month salary to pursue independent study, plus a few “special projects”.
Peter Hartree worked at 80,000 Hours for multiple years, and was interested in exploring a broader career shift – to take more time to study and think about core longtermist problem areas.
He received great references from his colleagues at 80k, and I am generally in favor of people at EA organizations reconsidering their career trajectory once in a while and being financially supported while doing so (especially given that current salaries at most EA organizations make building runway for this kind of reflection hard).
Note: We recommended this grant to a private funder, rather than funding it through LTFF donations, since it is sometimes hard to demonstrate the public benefit of “independent study”.
Aysajan Eziz (officially, Aishajiang Aizezikali) ($45,000): 9-month salary for an apprenticeship in solving problems-we-don’t-understand.
Aysajan is apprenticing to John Wentworth, whose work we’ve funded in the past and whose work seems like some of the most promising AI alignment research currently being produced. In this case, I had little information on Aysajan, but was excited about more people working with Wentworth on his research, which seemed like a good bet.
Nicholas (Nick) Whitaker ($18,000): 3 months of blogging and movement building at the intersection of EA/longtermism and Progress Studies
David Rhys Bernard ($11,700): 4-month salary for research assistant to help with surrogate outcomes project on estimating long-term effects
Effective Altruism Sweden ($4,562): Funding a Nordic conference for senior X-risk researchers and junior talents interested in entering the field
Benjamin Stewart ($2,230): 6-week salary for self-study in data science and forecasting, to upskill within a GCBR research career
Caroline Jeanmaire ($121,672): Two-year funding for a top-tier PhD in Public Policy in Europe with a focus on promoting AI safety
Logan McNichols ($3,200): Funding to pay participants to test a forecasting training program
The core principle of the program is to realistically simulate normal forecasting, but on questions which have already been resolved (backcasting). This creates the possibility of rapid feedback. The answer can be revealed immediately after a backcast is made, whereas forecasts are often made on questions which take months or years to resolve. The fundamental challenge of backcasting is gathering information without gaining an unfair advantage or accidentally stumbling on the answer. This project addresses the challenge in a simple way: by forming teams of two, an information gatherer and a forecaster.
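As a sketch of the feedback loop this enables (my own illustration – the questions, probabilities, and scoring choice below are made up for exposition), each backcast can be scored with a proper scoring rule such as the Brier score the moment it is made:

```python
def brier_score(probability: float, outcome: bool) -> float:
    """Squared error between the stated probability and the resolved outcome (0 best, 1 worst)."""
    return (probability - float(outcome)) ** 2

# Hypothetical backcasting rounds: the questions are already resolved, so the
# forecaster gets feedback immediately. The information gatherer prepares a
# briefing without revealing the answer; the forecaster states a probability.
rounds = [
    {"question": "Did country X ratify treaty Y by 2015?", "forecast": 0.7, "resolved": True},
    {"question": "Did product Z launch before 2010?", "forecast": 0.2, "resolved": True},
]

for r in rounds:
    print(f'{r["question"]} -> Brier score {brier_score(r["forecast"], r["resolved"]):.2f}')

average = sum(brier_score(r["forecast"], r["resolved"]) for r in rounds) / len(rounds)
print(f"Average Brier score: {average:.2f}")
```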
Since this was a small grant, we didn’t evaluate it in a lot of depth. The basic idea seemed reasonable to me, and it seemed like it might indeed improve training for people who want to get better at forecasting.