EA megaprojects continued

This post is a continuation and extension of the EA megaproject article by Nathan Young. Our list is by no means exhaustive, and everyone is welcome to extend it in the comments.

The people who contributed to this article are, in no particular order: Simon Grimm, Marius Hobbhahn, Max Räuker, Yannick Mühlhäuser and Jasper Götting.

We would like to thank Aaron Gertler for his feedback and suggestions.

Donating to GiveDirectly is often seen as a scalable baseline for the effectiveness of a project. Another scalable baseline could be investing in clean energy research. We don’t expect all of the following ideas to meet either bar. Some of them might just be interesting or thought-provoking.
The value of these megaprojects depends, among other things, on your risk profile, your views on patient philanthropy and the option value of the idea.

Movement building:

There are many EA organizations in the larger space of “getting more EAs into the right position”, such as 80K, different scholarships, local chapters, etc. However, given that EA is currently more people- than resource-constrained, it seems plausible that there is much room for growth.

Some people have argued for keeping EA small (see here or here). However, we don’t think it makes sense to limit access to the movement to the point where EA would stop growing. A plausible future role for EA in society might resemble that of science: everyone broadly knows what science is, and most people have a good opinion of it, but most people aren’t scientists themselves. Most scientific fields are competitive, select for highly talented people, and grow somewhere between linearly and exponentially. The goal should not be to reach as many people as possible (as religions do) but to find talented individuals and funnel them into the movement (as sports organizations or universities do).

Ideas for bigger projects in the space of movement building include:

1. A large global scholarship foundation:

EAs could build a foundation that helps young people with their choice of study while providing financial support, visas and mentorship during their studies. Such a foundation would also yield an extensive global network of young, talented individuals. Furthermore, it could scout for talented individuals early on, through events similar to existing math/science/coding/forecasting tournaments or through recruiting practices common in sports. Some of these elements already exist on a smaller and more informal scale, but a global institution would provide increased visibility, prestige, and economies of scale. A scholarship foundation could also focus specifically on developing countries and identify talented individuals who could migrate to richer countries. The aim would be to create an organization that carries as much prestige as the Fulbright or Rhodes scholarships.

2. Buy a Big Newspaper:

Philanthropists and other wealthy individuals regularly buy or own major news outlets (Jeff Bezos bought WaPo; Rupert Murdoch owns vast swathes of center-right to right-wing media). An affluent EA could buy a trusted newspaper and direct it in a more evidence-based direction, for instance, by incorporating forecasts into reporting or by highlighting certain positive developments that get neglected. Future Perfect has shown how EA-influenced reporting can have a place in a major news outlet, and the Ezra Klein show sometimes uses an EA framing, but we think there is room for more. This could have a particularly relevant impact in non-English speaking countries.

Science:

1. Fund very large RCTs/population-wide studies

Conventional RCTs are often underpowered to detect certain (often population-wide) relationships. Nonetheless, knowledge of these relationships can be impactful, as seen in the recent literature on the harms of air pollution. Funding more large-scale studies could uncover further important knowledge. Valuable targets for such an effort might be mental health, the impacts of different foods, pollution, fertility, nootropics (cognitive enhancers), the effects of agrochemicals, and many more. However, the marginal benefit of such studies might be small, given that governments are already incentivized to pursue most of these research questions.

2. A Max-Planck Society (MPG) for EA research

In 2019 the MPG had a yearly budget of €2.5 billion, distributed among 86 institutes, i.e. roughly €30 million per institute. The MPG institutes attract a lot of international talent and are usually ranked among the best institutions in their areas of research. They allow researchers to focus entirely on research, with no teaching or admin requirements. Moreover, MPG institutes provide grants for very long projects; durations of up to 10 years aren’t uncommon. A similar English-speaking institution might be the Institute for Advanced Study.
Setting up 3-5 EA-aligned institutes with a similar model would attract talent and increase both prestige and research output. These institutes could focus on classic EA topics such as longtermism, AI safety, biosecurity, global health and development, animal welfare, and so on.
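As a quick sanity check on the per-institute figure, here is a minimal sketch; the total budget and institute count come from the text above, everything else is arithmetic:

```python
# Rough per-institute budget of the Max-Planck Society (MPG) in 2019.
total_budget_eur = 2.5e9  # yearly budget, from the text
num_institutes = 86       # number of institutes, from the text

per_institute = total_budget_eur / num_institutes
print(f"≈ €{per_institute / 1e6:.1f} million per institute")
```

This gives roughly €29 million per institute, consistent with the ~€30 million figure above.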

3. An EA university (suggested by Will MacAskill at EAG2021)

In contrast to a research-focused institution such as the Max-Planck Society, a university would include teaching and a broader range of topics and disciplines. Nonetheless, a university and an MPG-style institute would overlap substantially.
Given that education and career choice are important aspects of EA, it might make sense to launch an entire university. This university could offer generalist degrees covering all EA subfields as well as focused degrees on AI alignment, biosecurity, animal welfare, and so on. An EA-funded university could pay higher salaries, attracting highly talented researchers and thus increasing its popularity among students. Some researchers could be freed from teaching requirements, while others might be hired explicitly to provide high-quality teaching for the students’ benefit.

4. Fund professorships focused on EA-relevant topics

This proposal is the smaller, more diversified, and less public version of the EA university idea.

5. A Forecasting Organization (suggested by Will MacAskill at EAG2021)

Create a forecasting institute that employs top-performing forecasters to work on EA-relevant questions. Such an institute could also host, maintain and develop existing forecasting platforms (1, 2), since most existing platforms are relatively small and rely on the work of volunteers.

6. Create EA-aligned advance-market commitments (AMC)

AMCs incentivize the development of products by guaranteeing a government or non-profit payout or purchase once the product exists. They have a good track record in facilitating vaccine development against neglected diseases, such as pneumococcal disease.
AMCs could be used in cause areas such as biosecurity, incentivizing researchers to develop technologies with little or no existing market. Some examples within biosecurity are antivirals against potential pandemic pathogens (SARS, MERS, influenza), widespread cheap metagenomic sequencing, needle-free broad-spectrum vaccines, or better PPE (i.e. more comfortable, safer, and better looking). Furthermore, EA-guided development would enable norm-setting and the prioritization of low-downside technologies (differential technological development). Finally, AMCs could also be used in cause areas such as global health and longevity research, though these do not seem as neglected.

7. Prizes for important technologies and research

Set challenges and pay prizes for EA-relevant inventions, similar to the Ansari X Prize. Compared to AMCs, these instruments can probably be used for very early-stage work while requiring less funding.

8. Implement a small pilot of the Nucleic Acid Observatory (NAO):

Getting a clearer picture of currently circulating pathogens is one of the most valuable interventions to enable the fast detection and containment of emerging outbreaks, thereby also deterring the intentional use of engineered pathogens.
A significant project enabling this is the proposed Nucleic Acid Observatory. Such an observatory would monitor circulating nucleic acids in the sewage of major cities or in the wastewater of central, highly frequented facilities (e.g. airports, hospitals, train stations).
The pilot NAO outlined in the paper would cover all US ports of entry, costing $700m a year plus $700m for the setup. To keep with the megaprojects figure of $100m, one could launch a smaller pilot covering 1 to 3 coastal states in the US.
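To get a feel for the scale-down, here is a back-of-the-envelope sketch. The $700m setup and yearly figures come from the paper as summarized above; the assumption that cost scales linearly with coverage is ours and is certainly too crude, since fixed costs don't shrink proportionally:

```python
# Back-of-the-envelope: what fraction of the full NAO program a $100m
# megaproject budget would buy in year one, assuming (crudely) that
# cost scales linearly with coverage.
setup_cost = 700e6   # one-off setup cost, from the text
yearly_cost = 700e6  # running cost per year, from the text
budget = 100e6       # the megaprojects reference figure, from the text

first_year_total = setup_cost + yearly_cost
fraction = budget / first_year_total
print(f"${budget / 1e6:.0f}m covers ≈ {fraction:.0%} of the first-year cost")
```

Roughly 7% of the first-year cost, which is in the right ballpark for a pilot covering a handful of coastal states rather than all US ports of entry.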

9. Purchase high impact academic journals (suggested by Will MacAskill at EAG2021)

Existing incentives within science are not fully aligned with truth-seeking: traditional research rewards novelty and significance, while replication studies and negative results are undervalued. An EA-aligned top-tier journal might be able to address some of these problems. However, existing scientific norms are entrenched and hard to change. We would guess that this idea is among the less effective proposals in this post.

10. Buy promising ML labs

In 2014 DeepMind was acquired by Google for $500 million. DeepMind retains considerable leeway, but Google still influences important decisions. Thus, the impact of the acquisition could be very high if DeepMind continues to be a leading organization on the path towards transformative AI. There might be other companies today that would be interested in a similar agreement, except that they would commit to steering their work towards the design of safe and value-aligned AI. They might start collaborating with existing AI safety organizations and would be more likely to help with difficult coordination problems that could arise in the coming decades, e.g. AI races.

Governance:

1. A really large think tank:

There are a couple of EA-aligned think tanks and NGOs in the policy space, but they are all relatively small. For comparison, the Brookings Institution spends over $90 million per year, and the RAND Corporation spends around $340 million. An EA think tank with large research teams could focus on important policy questions and monitor major countries’ legislative processes. Housing such a think tank in the US seems most promising, but one could also create new think tanks in the EU or Asia.
This could also entail developing a deep network of lobbyists working on EA causes. There is reason to believe that well-resourced lobbying efforts can have a significant influence if they are well-targeted. However, the biggest bottleneck for such an organization might not be money but the lack of trusted, senior individuals.

2. Invest in local/national policy:

There are a couple of structural political changes that might be worth pursuing from an EA perspective, for example: a ministry of the future, more funding for longtermist science projects, advancing international denuclearisation, proposing very progressive animal welfare laws, etc.
EAs could invest in cities, political parties or candidates whose success would be better for the world (e.g. those who want to build the institutions, implement the laws and start the projects mentioned above). Other topics include improved immigration and housing policies and experimentation with governance mechanisms: essentially everything that makes some EAs excited about charter cities, except implemented in existing cities. Again, we are not as excited about this proposal, given its low tractability and the risk of politicizing EA.

Miscellaneous:

1. Buy a coal mine (suggested by Will MacAskill at EAG2021)

Alex Tabarrok of Marginal Revolution stated that one might lease a coal mine for as little as $7.8 million. However, this number is misleading, as the contract includes an extraction target, making the actual cost much higher.
Nevertheless, when buying or leasing a mine, the aim would be to not use it. There are two reasons for this. First, it would contribute to climate change prevention. But, more importantly, having a backup coal mine would reduce existential risk. If humanity falls victim to a global catastrophe that wipes out civilization (most humans, technology, most trade, etc.) but does not kill everyone, we will need easy ways to restart. Reserving some coal in accessible locations could make a relaunch of civilization easier, as that coal could be used to create energy and heat to bootstrap our tech tree.

2. Fund an enormous EA forecasting tournament to find talented forecasters all around the world

Last year, Forbes reported on a relatively modestly prized forecasting tournament on Metaculus (the total prize pool was $50,000). Such tournaments could easily be scaled up to a global level. Without having talked to people involved in the forecasting community, we believe such a tournament could lead to

a) identifying talented people from all over the world who can think about complex, EA-related issues (analogous to chess/math/gaming competitions),
b) bringing EA-related issues to broad public attention, and
c) publicizing the idea and value of probabilistic forecasting.
It seems unclear whether such tournaments can be scaled without losing valuable features such as incentivizing honest reporting of uncertainty and the sharing of useful information, as discussed here. It is also unclear whether funding prediction markets and pushing them towards more EA-relevant questions would be a better idea.

3. Stockpile personal protective equipment (PPE):

A rolling stockpile of PPE is very desirable for emerging pandemics, yet most countries were not prepared for the COVID-19-induced surge in demand, resulting in shortages of basic PPE like masks and gloves.

While some countries like the US or UK have set up or expanded their stockpiles, it might still be valuable to create an international stockpile for subsets of the population that need to be mobile during an emerging pandemic and do not have access to existing stockpiles, e.g. in the developing world. Most PPE is cheap, non-perishable, easy to store, and almost entirely pathogen-agnostic. But we think this idea is among the less effective proposals here, due to existing stockpiles and the high demands of logistics and upkeep.