Big List of Cause Candidates
Many thanks to Ozzie Gooen for suggesting this project, to Marta Krzeminska for editing help and to Michael Aird and others for various comments. In early 2022, Leo updated the list with new candidates suggested before March 2022, in this post—these have now been incorporated into the main post.
In the last few years, there have been many dozens of posts about potential new EA cause areas, causes and interventions. Searching for new causes seems like a worthy endeavour, but on their own, the submissions can be quite scattered and chaotic. Collecting and categorizing these cause candidates seemed like a clear next step.
At the time we first published this post, we (Ozzie Gooen of the Quantified Uncertainty Research Institute and I) noted that we might later be interested in expanding this work and eventually using it for forecasting, e.g., predicting whether each candidate would still seem promising after much more rigorous research. At the same time, we felt that the original list itself could already be useful. Since then, we haven’t carried out forecasting experiments, but we have updated the list once, as noted above.
Below is the current list with suggestions up to March 2022. It has a simple categorization, as well as an occasional short summary which paraphrases or quotes key points from the posts linked. See the last appendix for some notes on nomenclature. If there are any entries I missed (and there will be), please say so in the comments and I’ll add them.
Initially, I created the “Cause Candidates” tag on the EA Forum, tagged all of the listed posts, and made a Google Sheet available. These are not maintained, but might be maintained upon request.
Animal Welfare and Suffering
Pointer: This cause has its various EA Forum tags (farmed animal welfare, wild animal welfare, meat alternatives), where more cause candidates can be found. Brian Tomasik et al.’s Essays on Reducing Suffering are also a gift that keeps on giving for this and other cause areas.
1. Wild Animal Suffering Caused by Fires
Related categories: Politics: System change, targeted change, policy reform.
Wild animal suffering caused by fires and ways to prevent it: a noncontroversial intervention (@Animal_Ethics)
An Animal Ethics grantee designed a protocol aimed at helping animals during and after fires. The protocol contains specific suggestions, but the path to turning these into policy is unclear.
2. Invertebrate Welfare
Invertebrate Welfare Cause Profile (@Jason Schukraft)
The scale of direct human impact on invertebrates (@abrahamrowe)
“In this post, we apply the standard importance-neglectedness-tractability framework to invertebrate welfare to determine, as best we can, whether this is a cause area that is worth prioritizing. We conclude that it is.”
Note: See also Brian Tomasik’s Do Bugs Feel Pain.
3. Humane Pesticides
Humane Pesticides as the Most Marginally Effective Cause (@JeffMJordan)
Improving Pest Management for Wild Insect Welfare (@Wild_Animal_Initiative)
The post argues that insects experience consciousness, and that there are a lot of them, so we should give them significant moral weight (comments contain a discussion on this point). The post goes on to recommend subsidization of less-painful pesticides, an idea initially suggested by Brian Tomasik, who “estimates this intervention to cost one dollar per 250,000 less-painful deaths.” The second post goes into much more depth.
4. Diet Change
Is promoting veganism neglected and if so what is the most effective way of promoting it? (@samuel072)
Animal Equality showed that advocating for diet change works. But is it cost-effective? (@Peter_Hurford, @Marcus_A_Davis)
Cost-effectiveness analysis of a program promoting a vegan diet (@nadavb, @sella, @GidonKadosh, @MorHanany)
Measuring Change in Diet for Animal Advocacy (@Jacob_Peacock)
The first post is a stub. The second post looks at a reasonably high-powered study on individual outreach. It concludes that, based on reasonable assumptions, the particular intervention used (showing videos of the daily life of factory-farmed pigs) isn’t competitive with human-focused interventions:
“(...) we now think there is sufficient evidence to establish that individual outreach may work to produce positive change for nonhuman animals. However, evidence in this study points to an estimate of $310 per pig year saved (90% interval: $46 to $1100), which is worse than human-focused interventions even from a species neutral perspective. More analysis would be needed to see how individual outreach compares to other interventions in animal advocacy or in other cause areas.
Given that a person can be reached for ~$2 and that they spare ~1 pig week, that works out to $150 per pig saved (90% interval: $23 to $560) and, again assuming that each pig has a ~6 month lifespan, that works out to $310 per pig year saved (90% interval: $47 to $1100). To put this in context, Against Malaria Foundation can avert a year of human suffering from malaria for $39, this does not look very cost-effective.”
Comments point out that the postulated retention rates may be too high (making the intervention even worse). Lastly, the second post was written in 2018, and more work might have been done in the meantime.
The third post is somewhat more recent (Nov 2020), but it reports results in terms of “portions of meat not consumed” rather than “animal-years spared”. This makes comparison with previous research less straightforward, because different animals correspond to different intensities and durations of suffering per kilogram of meat produced, and the post does not report how big these portions are or which animals they belong to.
The fourth post explores “current and developing alternatives to self-reporting of dietary data.”
5. Vegan/Vegetarian Recidivism
“But there’s a big problem with vegan/vegetarian advocacy: most people who switch to vegan/vegetarian diets later switch back.”
The post suggests paying more attention to the growth rate of the vegan/vegetarian movement. It also suggests some specific measures, like producing resources which make it easier for vegetarians/vegans to get all the nutrients they need in the absence of animal products.
6. Plant-Based Seafood
Plant-Based Seafood: A Promising Intervention in Food Technology? - Charity Entrepreneurship Approach Report (@vicky_cox)
This Charity Entrepreneurship report ultimately concludes that “...while fish product creation in Asia is the most promising intervention within food technology in terms of impact on animals, it is not the most promising intervention for Charity Entrepreneurship to focus on.”
Note: Charity Entrepreneurship has produced many more reports. But, as they are not tagged on the EA Forum, they were difficult to incorporate in this analysis, given the search method I was using (see Appendix: Method). They are, however, available on their webpage.
7. Improving plant-based diets
The Case for Rare Chinese Tofus (@George Stiffman)
To improve vegan alternatives, this post proposes creating new types of plant-based food by combining rare Chinese tofus with traditional Western cooking methods. The author analyzes the idea in detail and answers possible objections.
8. Moral Circle Expansion
“This blog post makes the case for focusing on quality risks over population risks. More specifically, though also more tentatively, it makes the case for focusing on reducing quality risk through moral circle expansion (MCE), the strategy of impacting the far future through increasing humanity’s concern for sentient beings who currently receive little consideration (i.e. widening our moral circle so it includes them.)”
In particular, the post makes this point by comparing moral circle expansion to AI alignment as a cause area.
9. Analgesics for Farm Animals
Related categories: Politics: System change, targeted change, policy reform.
Analgesics for farm animals (@Monica)
“There is only one FDA approved drug for farm animal pain in the U.S. (and that drug is not approved for any of the painful body modifications that farm animals are subjected to), FDA approval might meaningfully increase the frequency with which these drugs [are] actually used, and addressing this might be a tractable and effective way to improve farm animal welfare [...] Farm animals in the U.S. almost never get pain medication for acutely painful procedures such as castration, tail docking, beak trimming, fin cutting, abdominal surgery, and dehorning. What I was not aware of until this morning is that there is only one FDA approved medication for ANY farm animal analgesic, and that medication is specifically approved only for foot rot in cattle [...] In contrast, the EU, UK, and Canada have much higher standards for food residues in other domains (hormones, antibiotics, etc.) but have nevertheless approved several pain medication[s] for several procedures in species of farm animals. As a result, these drugs are much more commonly used there.”
10. Welfare of Specific Animals
Rethink Priorities has done research on the welfare of specific animals, and possible interventions to improve it. They produced a number of profiles, some of which I include here for illustration purposes, but without any claim to comprehensiveness. Thanks to Saulius for bringing my attention to this point.
Honey Bee Welfare: Managed Honey Bee Welfare: Problems and Potential Interventions (@Jason Schukraft)
Baitfish: Fish used as live bait by recreational fishermen (@saulius)
Fish Stocking: 35-150 billion fish are raised in captivity to be released into the wild every year (@saulius)
Wild-caught Fish: Worse things happen at sea: The welfare of wild-caught fish (Alison Mood, fishcount.org.uk)
Cleaner Fish: Cleaner Fish: A Neglected Issue Within A Neglected Issue (@Martine Klock Fleten). Despite poor evidence of its effectiveness, using cleaner fish to control sea lice is common practice among salmon farmers, leading to the cleaner fish’s suffering and death. The post proposes improving cleaner fish welfare, or putting an end to this practice.
Rodents Fed to Snakes: Rodents farmed for pet snake food (@saulius)
Mice and Rats: [Question] Are mice or rats (as pests) a potential area of animal welfare improvement? (@Louis_Dixon) This post hints that the suffering of mice and rats in cities might be a possible cause area. The answers give several clues about how to tackle the issue.
Insect Farming: Insects raised for food and feed — global scale, practices, and policy (@abrahamrowe)
Snail Farming: Snails used for human consumption: The case of meat and slime (@Daniela R. Waldhorn)
Cochineals: Global cochineal production: scale, welfare concerns, and potential interventions (@abrahamrowe)
Silkworms: Silk production: global scale and animal welfare issues (@abrahamrowe). The post examines if the suffering of silkworms used in silk production could be an area to be prioritized, but concludes that available resources “might be better spent in other areas, such as reducing the painfulness of pesticides, reducing the number of insects farmed for animal feed, and reducing the harms of cochineal farming”.
Owned cats outdoors in Canada: Would a reduction in the number of owned cats outdoors in Canada and the US increase animal welfare? (@kcudding)
Chickens: [Question] New EA cause area: Breeding really dumb chickens (@Sam Enright). This post poses some questions about the idea of “breeding chickens (and other farm animals) to be less intelligent as a way to reduce the suffering caused by factory farming”.
Baboons: Urban wildlife in South Africa—Cape baboons (@ajmfisher). “The aim of this post is to catalogue existing methods for managing the population of Cape chacma baboons living in the Cape peninsula, with a focus on welfare impacts for the baboons.”
11. Cell-Based Meat R&D
Based on a Fermi estimate, the author concludes that “cell-based meat research and development is roughly 10 times more cost-effective than top recommended effective altruist animal charities.”
12. Animal-Free Proteins
“The report describes what needs to happen to get to 11%, and further to 22% of meat, seafood, eggs and dairy eaten globally every day. Current technology must be refined and scaled, and in some areas, step changes are needed. For instance, optimized protein crops for human consumption need to be bred, and microorganisms as well as animal cells grown on low-cost feedstocks. Regulatory support, such as carbon taxes on meat or subsidies for farmers who are shifting from animal agriculture to alternative proteins, could further boost growth.”
13. Antibiotic Resistance in Farmed Animals
Antibiotic resistance: Should animal advocates intervene? (@Bella_Forristal)
“Reducing antibiotic use in farms is very likely to be net positive for humans. However, it is not clear whether it would be net positive for animals. If farmers stop using antibiotics, animals might suffer from more disease and worse welfare. This effect might be mitigated by the fact that (i) farmers can replace antibiotics with substitutes such as probiotics, prebiotics, and essential oils, which also prevent disease, and (ii) farmers might be motivated to make adaptations to farming practices which prevent disease and also benefit animal welfare, such as lowering stocking density, reducing stress, and monitoring disease more closely. It is not obvious how likely it is that farmers will take these disease-mitigating measures, but since high disease rates increase mortality, decrease carcass profitability, and could cause reputational damage, it is plausible that they will be motivated to do so. Alternatively, animal advocates could take the ‘holistic strategy’ of promoting welfare measures which also tend to cause reduced antibiotic use. Tentatively, I take the view that eliminating antibiotic use on a farm would not lead to worse lives for those animals.
Eliminating antibiotics might also be expensive for producers, and because of this, it could increase the price of animal products in the short term, which would be good for animals. The literature weakly supports the view that meat prices will increase following an antibiotic ban. However, there is also some support for the view that price will increase differentially for smaller and larger animals, which lands us with the small animal replacement problem. This problem could be avoided by the approach taken to the intervention, e.g. a corporate campaign targeting only small animals.”
14. Helping Wild Animals Through Vaccination
Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2? (@Animal_Ethics)
“We will first see some cases of successful vaccination programs in the past, including vaccination against rabies, anthrax, rinderpest, brucellosis, and sylvatic plague, in addition to the proposal to vaccinate great apes against Ebola. Next, we will see how zoonotic epidemics have been the object of growing attention. We will then see some responses to them that are misguided and harmful to animals. We will then see the prospects for eventual wild animal vaccination programs against coronaviruses like SARS-CoV-2. We will see the three main limitations of such hypothetical programs. These are the lack of an effective vaccine, the lack of funding to implement the vaccination program, and the lack of an effective system to administer the vaccine. We’ll consider the extent to which these limitations could be overcome and what clues previous examples of vaccination can provide. As we will see, such programs remain to date merely speculative. They could be feasible at some point as other wild animal vaccination programs show. However, it remains uncertain whether there will be human interest in implementing them, despite the benefits for animals themselves.
Finally, we will see the reasons why, if implemented, programs of this kind could substantially help not just the vaccinated animals, but many others as well. Not only would this prevent zoonotic disease transmission to other animals, but such measures could also help inform other efforts to vaccinate animals living in the wild. Moreover, each successful vaccination program helps to illustrate that helping animals in the wild is not impractical, but realistic. This helps to raise concern for these animals and to inspire action on their behalf.”
15. Herbivorizing Predators
Should we herbivorize predators? (@Stijn)
This post puts forth a moral argument intended to open discussion of herbivorizing predators as a cause area. The author argues that we should start scientific research into new technologies that would make it possible.
Community Building
1. Effective Animal Advocacy Movement Building
Related categories: Animal Welfare and Suffering
The post argues that EAA-specific movement-building might be particularly neglected within EA.
2. Non-Western EA
Neglected EA Regions (@DavidNash)
The post asks about expanding EA beyond the USA and Europe. It gets some pushback in the comments, particularly because of the difficulty of transmitting ideas with high fidelity.
3. Understanding and/or Reducing Value Drift
Pointer: This cause has its own EA Forum tag.
A Qualitative Analysis of Value Drift in EA (@MarisaJurczyk)
4. High School Outreach
EA outreach to high school competitors (@Nikola)
“Specifically targeting STEM, logic, debate, and philosophy competitors with short outreach could increase high school outreach effectiveness as it would select for high-performing students who are more likely to engage with EA ideas. This would give these individuals more time to think about career choice and enable them to start building flexible career capital early and might make them more open to engaging with EA in the future.”
5. Idea Inoculation
Effective outreach: evaluating “Idea innoculation” (@rsturrock)
This post proposes an experiment to serve as the basis of a psychology paper about EA ‘idea inoculation’, in order to discover better ways of conveying EA-related information.
6. Values Spreading
Values Spreading is Often More Important than Extinction Risk (Brian Tomasik)
On Values Spreading (@MichaelDickens)
Against moral advocacy (Paul Christiano)
Effective Altruism and Free Riding (@sbehmer)
High-Leverage Values Spreading (@MichaelDickens)
Promoting Simple Altruism (@LiaH)
Values spreading refers to improving other people’s values. The idea has met with some skepticism, but perhaps variants of it, like highly targeted or high-leverage values spreading, could still be promising.
Transhumanism
Related categories: Global Health and Development, States of Consciousness
1. Cryonics
Cryonics will probably get cheaper if more people sign up. It might also divert money from wealthy people who would otherwise spend it on more selfish things. Further, cryonics might help people take long-term risks more seriously.
“One advantage of life extension is that it might prompt people to think in a more long-term-focused way, which might be nice for solving coordination problems and x-risks.”
One could also argue “that cryonics doesn’t create many additional QALYs because by revival time we’ve probably hit Malthusian limits. So any revived cryonics patients would be traded off against other future lives.”
The author argues that brain preservation is “one of the best areas for people interested in helping others to work in” and “a great place for people who are interested in helping others to donate money”.
2. Ageing
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
How to evaluate neglectedness and tractability of ageing research (@Emanuele_Ascani)
Project Proposal: Gears and Aging (@johnswentworth)
[Draft] Fighting Aging as an Effective Altruism Cause (@turchin)
Cost-Effectiveness of Aging Research (@SarahC)
A general framework for evaluating ageing research (@Emanuele_Ascani)
RP Work Trial Output: How to Prioritize Anti-Aging Prioritization—A Light Investigation (@Linch)
3. Genetic Enhancement
Genetic Enhancement as a Cause Area (@Galton)
The post makes the argument both from a short and long-term perspective. I was particularly intrigued by the suggestion to select for empathy; the comments also suggest selecting against malevolent traits.
4. Mind Enhancement
“This post aims to raise awareness, provide a rough framework for classification and list the most important theoretical arguments and considerations regarding the impact/desirability of mind enhancement.”
Cause profile: Cognitive Enhancement Research (@George Altman)
“This post is a first attempt at analysing cognitive enhancement research using the ITN framework and cost-effectiveness estimates. Several interventions enhance cognitive functions such as intelligence and decision making. If we identify effective, cheap and scalable cognitive enhancement interventions, they may be competitive with GiveWell charities.”
5. Finding Extraterrestrial Life
Cosmic EA: How Cost Effective Is Informing ET? (@TruePath)
Politics
Politics: Ideological Politics
1. Local Political Causes
Should local EA groups support political causes? (@lukasberglund)
Recommendations for prioritizing political engagement in the 2020 US elections (@IanDavidMoss)
New Top EA Cause: Politics (@Davidmanheim). Note: Satirical.
Georgia on my Mind: Effectively Flipping the Senate (@deluks917)
What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? (@Evan_Gaensbauer)
What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? (@DonyChristie)
Why are party politics not an EA priority? (@Chantal)
2. Fighting Harmful Ideologies
Ineffective Altruism: Are there ideologies which generally cause their adherents to have worse impacts? (@Nathan Young)
Note: Post is a stub.
Politics: Global politics
Pointer: See also the EA Forum tag for Global Governance.
1. Democracy Promotion
Democracy Promotion as an EA Cause Area (@bryanschonfeld)
The author estimates the benefits of democracy. They then suggest concrete actions to take: “a review essay on the efficacy of tools of external democracy promotion finds that non-coercive tools like foreign aid that is conditioned on democratic reforms and election monitoring are effective, while coercive tools like sanctions and military intervention are ineffective… One tool EA organizations can fund is election monitoring. Research suggests that election monitoring can play a causal role in decreasing fraud and manipulation.” Some forum comments suggest that the area is too costly, and not that neglected.
Decreasing populism and improving democracy, evidence-based policy, and rationality (@Hauke Hillebrandt)
This post explores several funding opportunities in this area. The author lists the following causes:
Increasing rationality: Rationality could be increased by funding books such as Galef’s The Scout Mindset, or projects such as the Winton Centre for Risk and Evidence Communication.
General education spending: “[E]ducation can often predict populism better than income. Thus one funding idea might be to try to increase education budgets globally.”
Civic education: “[C]ivic education can strengthen democratic beliefs and explain the relevance of pluralism, which can play an important role in preventing populist attitudes.”
Journalism: Funding ideas include payments for online news content, the provision of local and investigative journalism, and investigative fact-check websites in general.
Information spreading: “One way to reduce populism is to give activists the tools to expose and debunk populist ‘common sense’ arguments”, tools like Our World in Data.
Research on populism: “Funding opportunity: Fund an academic researcher working on populism.”
English language education: “Fostering English language learning improves access to more content. This might improve international relations.” But the author notes that learning English doesn’t seem neglected.
Elections and voting: Funding opportunities include switching back to paper ballots to increase trust (Verified Voting Foundation), voting system reform, and the uses of statistical techniques to test election results, among others.
Combatting computational propaganda: Several funding opportunities and ideas are proposed here to counterbalance current AI techniques spreading misleading information.
Fostering more independent commissions and monitoring: Following the example of the Independent Commission for Aid Impact, which scrutinises UK aid spending, institutional decision making could be improved by setting up independent commissions for every major department in government.
Aggregating expert consensus: “Aggregating expert consensus might decrease populism by reducing the faith put in common sense approaches.”
Prediction markets: “Furthering the use of prediction market might help increase the accuracy of forecasts for important policy issues.”
Doing more fundamental research: Funding could foster fundamental research, which is useful to find new techniques to improve institutional decision making.
2. Promotion of Parliamentarism
The effective altruist case for parliamentarism (@Tiago Santos)
The author applies the ITN framework to promotion of parliamentarism and concludes that it is a valuable cause for EAs to focus on.
3. Promotion of Self-Determination
This post discusses the advantages of promoting more recognition of a right to self-determination. It develops a set of criteria to be met and applies them to the particular cases of Artsakh, Taiwan, and Crimea. Finally it shows some possible ways to reinforce the idea of self-determination.
4. Human Rights in North Korea
Cause Area: Human Rights in North Korea (@Denis Drescher)
The scale of suffering seems vast, and marginal interventions (e.g., smuggling North Koreans out of China) might be cost-effective. The post also suggests capacity building in this area might be a promising intervention.
5. Improving Local Governance in Fragile States
Politics: System Change, Targeted Change, and Policy Reform
Note: These categories are grouped together because in practice the distinction between broad system change from outside a political system and targeted change or policy reform within a system is often not quite clear.
Pointer: This cause candidate has a related EA Forum tag: Policy Change.
1. Better Political Systems and Policy-Making
Pointer: The related Institutional Decision-Making has its own EA Forum tag; more cause candidates can be found there.
Cause: Better political systems and policy making (@weeatquince)
Some personal thoughts on EA and systemic change (@Carl_Shulman)
Deliberation May Improve Decision-Making (@Neil_Dullaghan)
Answer to “Short List of Cause Areas?” (@Jack Cunningham)
2. Getting Money Out of Politics and Into Charity
Getting money out of politics and into charity (@UnexpectedValues)
Donors from two opposing parties could be matched to send their money to their favourite charities rather than to zero-sum political contests.
3. Vote Pairing
The post makes the case that vote pairing (where one or more voters for a mainstream candidate in a safe US state vote for a third-party candidate in exchange for a vote from a third-party supporter in a contested state) is much more effective than other traditional interventions.
4. Electoral Reform
Pointer: This cause has its own EA Forum tag. I’m adding one post for illustration purposes:
Why You Should Invest In Upgrading Democracy And Give To The Center For Election Science (@aaronhamlin)
Note: Included here for completeness. This isn’t, strictly speaking, a new cause area because the Center For Election Science is working on it.
5. Tax Justice
Tax Havens and the case for Tax Justice (@--alex--)
The post gives an overview of current efforts to make tax evasion or tax flight harder, and why this should be thought of as positive. A commenter, Larks, makes the opposite case.
6. Effective Informational Lobbying
Informational Lobbying: Theory and Effectiveness (@Matt_Lerner)
Effective Lobbying Discussion Group (@Noah Wescombe)
The first post starts with a literature review and concludes by proposing “something along the lines of ‘effective lobbying’: a rigorous approach to institutional-level change, starting with the legislature, that would take a portfolio approach to policy advocacy,” and outlines how that would broadly look.
The second post is a “call to all interested in lobbying as both a career and an EA methods topic.” Having a discussion group on this topic seems like a great idea, so I gave the post a strong upvote. However, it seems like it didn’t gain traction when it was posted in mid-December 2020.
7. Ballot Initiatives
Intervention Profile: Ballot Initiatives (@Jason Schukraft)
“The goal of this post is to bring ballot initiatives to the collective attention of the EA community to help promote future research into the effectiveness of ballot initiative campaigns for EA-aligned policies and movement-building.” The post gives examples of what might be accomplished with ballot initiatives and covers their advantages and disadvantages.
8. Increasing Development Aid
Related categories: Global Health and Development.
Funding Proposal: Supporting a Campaign to Increase Canadian Official Development Assistance (@jonathancourtney)
EAF’s ballot initiative doubled Zurich’s development aid (@Jonas Vollmer)
£4bn for the global poor: the UK’s 0.7% (@Sanjay)
9. Institutions for Future Generations
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
Institutions for Future Generations (@tylermjohn)
10. Decline or Collapse of the US
EA and the Possible Decline of the US: Very Rough Thoughts (@Cullen_OKeefe)
There are reasons to believe that the probability of a US regime collapse in the next 50 years is higher than 0.5%. The disutility from such a collapse could be extreme in certain scenarios.
Politics: Armed Conflict
This cause has two related EA Forum tags, Armed conflict and Nuclear Weapons, which may contain more cause candidates.
1. Preventing or Reducing The Severity of Nuclear War
Which nuclear wars should worry us most? (@Luisa_Rodriguez)
Note: Luisa Rodríguez has more content on this cause.
2. Ukraine Conflict
Ukraine giving—short term high leverage (@Timothy_Liptrot)
This post proposes supporting Ukraine as a cause area, arguing that if Russia is not strongly punished, other states could pursue similar policies. The author goes on to propose buying satellites to improve Ukraine’s military capabilities.
Global Health and Development
Pointer: This cause candidate has its own EA Forum tag.
1. Reducing the Efficiency of Genocides
Related categories: Politics
The post makes the case that at least some genocides (the Rwandan, Myanmar, and possibly Somali genocides) could have been stopped with better oversight and targeted use of resources.
2. Malnutrition
The author asks about the impact of malnutrition, i.e., “eating the wrong things as a voluntary choice despite having alternatives.” This would mostly be a problem for middle- and high-income countries.
3. Diet Change
Dietary habits – Another potential Cause Area? (@peter_janicki)
Unhealthy food choices result in poor diets that reduce expected lifespan and life quality. The author of this post collects a considerable amount of evidence and argues that this is a neglected area, given the number of people affected by those choices.
4. Raising IQ
Related categories: Transhumanism
Consider raising IQ to do good (@Lila_Rieber)
“Interventions to raise IQ could do a lot of good because of potentially significant flow-through effects of intelligence. IQ also has the benefit of being easily quantifiable, which would make it simpler to compare interventions.”
Note: In practice, the raising-IQ framing is unpalatable to some people, as are some charities in an adjacent space, like Project Prevention. However, because one of the most effective ways of raising IQ is reducing malnourishment or undernourishment, and in particular iodine deficiency, one could focus on these causes instead. Note that mal- or undernourishment in kids leads to lower wages in adulthood. Although one might suspect IQ is the mediating factor, it’s not necessary to emphasize the connection.
5. Physical Goods
The EA movement is neglecting physical goods (@ruthgrace)
“Seven out of eight of the GiveWell top charities deal with physical goods—anti-malaria nets, deworming medication, and vitamins. But otherwise, there’s not much discussion/active work in EA on how to improve/spin up the physical manufacture and distribution of physical goods beyond donating money to existing organisations.”
6. Fighting Diarrhoea
Diarrhoea seems like a large problem because of the number of people who die of it each year (figures as of 2015). The remedy is apparently “oral rehydration therapy: a large pinch of salt and a fistful of sugar dissolved in a jug of clean water.”
Note: GiveWell has moved slowly and cautiously on this topic, but Evidence Action’s Dispensers for Safe Water program is now a GiveWell Standout charity.
7. Fighting Fistulae
[Question] Can it be more cost-effective to prevent than to treat obstetric fistulas? (@brb243)
The author suggests a way of preventing fistulas which may be much cheaper than surgery: “targeting midwives to share information on when to seek specialized care and identify at-risk patients, training doctors at government (free of charge) clinics, providing equipment, and potentially offering travel stipend to extremely poor households”.
8. International Supply Chain Accountability
Related categories: Politics: System change, targeted change, policy reform.
Workers’ organizations can lobby international companies to adopt better labour conditions across their supply chain, and to get the original companies to pay for these efforts. A particularly promising strategy is to apply pressure in the countries these companies originate from (Spain, Germany, the US), rather than in the countries where the products are made. This seems to be working for the case of Inditex (Zara, and various other textile brands). It is unclear how, and if, EA might get organizations working in this area to accept external funds, but they could in principle absorb a lot of them.
Note: I’m the author of this post.
9. Chloramphenicol for Heart Attacks
Chloramphenicol as intervention in heart attacks (@G Gordon Worley III)
The article linked suggests approving Chloramphenicol as a coronary treatment, at a claimed fixed cost of “$25 million spent once to save 400,000 lives per year in the U.S. alone.” Comments point out that the estimate “seems to be based on one study of 21 pigs.”
10. COVID-19
Pointer: This cause candidate has its own EA Forum tag, which contains more cause candidates. Here are some examples included for illustration purposes:
Is rapid diagnostic testing (RDT), such as for coronavirus, a neglected area in Global Health? (@Ramiro)
Customized COVID-19 risk analysis as a high value area (@Askell)
Responding to COVID-19 in India (@Suvita)
Coronavirus Research Ideas for EAs (@Peter_Hurford)
Coronavirus and long term policy [UK focus] (@weeatquince)
11. Vaccines
EA Should Spend Its “Funding Overhang” on Curing Infectious Diseases (@joshcmorrison)
The author argues that “funding overhang” should be spent developing vaccines against infectious diseases:
“If EA’s investing $10 billion in vaccination over the next ten years could save the equivalent of 3-5 years of disease burden of a disease like tuberculosis, it would represent a cost per disability-adjusted-life-year (DALY) saved of roughly $50-$85 (on par with GiveWell top charities).”
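The arithmetic behind that figure can be reproduced in a few lines. The annual tuberculosis burden used below is an assumption for illustration; the author’s own inputs may differ:

```python
# Sketch of the cost-per-DALY arithmetic behind the quoted estimate.
# ASSUMPTION: an annual global tuberculosis burden of ~40 million DALYs;
# the post's own inputs may differ.
total_spend = 10e9       # $10 billion of vaccine funding over ten years
annual_tb_burden = 40e6  # DALYs per year attributed to TB (assumed)

for years_averted in (5, 3):
    dalys_averted = years_averted * annual_tb_burden
    print(f"{years_averted} years of burden averted: "
          f"${total_spend / dalys_averted:.0f} per DALY")
```

With these assumed inputs, averting 5 and 3 years of burden gives roughly $50 and $83 per DALY respectively, consistent with the quoted $50–$85 range.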
12. Clean Cookstoves
This is a very quick, rough model of the cost-effectiveness of promoting clean cookstoves in the developing world. It suggests that:
“If a clean cookstove intervention is successful, it may have roughly the same ballpark of cost-effectiveness as a GiveWell-recommended charity.
Circa 90% of the impact comes from directly saving lives, based on a model which estimated both the number of lives saved and the impact on climate change.”
13. Agricultural R&D
Agricultural research and development (@David_Goll)
“In combination, the difficulties with estimating the effects of R&D and the potential barriers to adoption suggest that the estimated benefit-cost ratios reported earlier are likely to be upwardly biased. The benefit-cost ratios estimated are also lower than those associated with Giving What We Can’s currently recommended charities. For instance, the $304 per QALY estimate based on the Copenhagen Consensus benefit-cost ratio, which appears to be at the higher end of the literature, compares unfavourably to GiveWell’s baseline estimate of $45 to $115 per DALY for insecticide treated bednets (GiveWell, 2013). The benefit-cost ratios also appear to be lower than those associated with micronutrient supplements, as discussed earlier. While there are significant benefits that remain unquantified within agricultural R&D, the same is also true for interventions based on bednet distribution, deworming and micronutrient supplements. As a result, while this area could yield individual high impact opportunities, the literature as it stands does not seem to support the claim that agricultural R&D is likely to be more effective than the best other interventions.”
14. Golden Rice
Should GMOs (e.g. golden rice) be a cause area? (@mariushobbhahn)
“In this post, I want to very roughly evaluate whether golden rice should be of interest to EAs and whether genetically modified organisms (GMOs) in general are worth investigating deeper.”
The author concludes that this cause is valuable, though acknowledges that golden rice wouldn’t reach levels of cost-effectiveness comparable to GiveWell top charities.
15. Agricultural Land Redistribution
Intervention report: Agricultural land redistribution (@David Rhys Bernard & @Jason Schukraft)
The authors conclude that advocating for agricultural land redistribution is neither tractable nor cost-effective.
16. Ventilation
Cost-Effectiveness of Air Purifiers against Pollution (@Lukas Trötzmüller)
“The goal for this post is to give an introduction into the human health effects of air pollution, encourage further discussion, and evaluate an intervention: The use of air purifiers in homes. These air purifiers are inexpensive, standalone devices not requiring any special installation procedure. A first analysis suggests that the cost-effectiveness of this intervention is two orders of magnitude worse than the best EA interventions. However, it is still good enough to qualify as an ’effective’ or even ‘highly effective’ health intervention according to WHO criteria.”
How a ventilation revolution could help mitigate the impacts of air pollution and airborne pathogens (@Mike Cassidy)
Indoor air pollution can be worse than outdoor pollution, yet it is neglected. Installing ventilation and filtration systems in our buildings would reduce economic losses arising from air pollution and respiratory viruses.
17. Stubble Burning in India
Stubble Burning in India (@Jason Schukraft)
Stubble burning in north India is a major contributor to seasonal decreases in ambient air quality [...] Stubble burning releases carbon dioxide, carbon monoxide, nitrogen oxides, sulfur oxides, and methane as well as particulate matter (PM10 and PM2.5) (Abdurrahman, Chaki, & Saini 2020). These pollutants affect the immediate area and also drift southeast to Delhi, smothering the city of ~22 million in thick haze. At their peak, these fires are responsible for ~58% of Delhi air pollution (Beig et al. 2020). The consequences of this air pollution include skin and eye irritation, respiratory problems (dry cough, wheezing, breathlessness, chest discomfort, asthma), and hypertension (Rizwan, Nongkynrih, & Gupta 2013). Air pollution is estimated to be responsible for at least 48,000 premature deaths in Delhi alone in 2020 (Greenpeace, n.d.).[5] Nationwide, the open burning of agricultural residue is estimated to be responsible for more than 66,000 premature deaths in India (GBD MAPS Working Group 2018).[6]
18. Starvation in Afghanistan
[Linkpost] Millions face starvation in Afghanistan (@aogara)
Since the Taliban seized power, US sanctions and the sudden suspension of foreign aid worsened the situation in Afghanistan to the extent that millions of people are at risk of starvation and death.
19. Water, Sanitation and Hygiene Interventions
According to GiveWell’s research, the overwhelming majority of the value of mass deworming interventions (DW) comes from expected long-term economic effects rather than short-term effects on health. The mechanism by which these long-term effects occur is unclear, especially as the health effects are so small.
The best WASH interventions (in particular, Dispensers for Safe Water, but possibly also Development Media International) have larger health effects than DW.
If the long-term effects of deworming are related to the health effects of worms, it is likely that the long-term economic effects of WASH interventions are at least as good. If the effects are somehow specific to worms, given that a significant part of the benefit of WASH is in preventing parasitic worm infestation, there should still be significant long-term effects of WASH interventions.
20. Research on Inbreeding
Inbreeding and global health & development (@pafnuty)
“Inbreeding (also known as consanguinity) is associated with an increased risk of adverse prenatal outcomes including stillbirths, low birth weight, preterm delivery, abortion, infant and child mortality, congenital birth defects, cognitive impairments, malformations and many other complex disorders. . . . [R]esearch on this issue in the context of global health and development is scarce, and additional research might generate ample information value about potentially impactful interventions.”
21. Stopping Miscarriages
Might stopping miscarriages be a very important cause area? (@SaraAzubuike)
The author implies that stopping miscarriages could be important (if there’s some probability that embryos are human), given that miscarriages occur in 20% of pregnancies.
22. Advocacy for Legalizing Abortion
[Question] Developing countries and adolescent pregnancy: how effective could advocacy for legalizing abortion be? (@Ramiro)
Adolescent pregnancy is associated with high rates of child mortality. The author suggests that advocacy for legalizing abortion may be an effective way to prevent this tragedy in developing countries.
23. Drug Legalisation
Ending The War on Drugs—A New Cause For Effective Altruists? (@MichaelPlant)
Moving from drug prohibition to legalisation would be beneficial to drug users (decriminalisation) and drug-producing and trafficking countries (less violence). The author raises the question of whether this could be an area to be prioritized.
24. Patent Policy
This post covers three candidates within patent policy: The first is global health innovation incentives:
“Alternative innovation finance mechanisms—such as advanced market commitments and the Health Impact Fund—can help incentivize firms to invest in R&D aimed at helping developing countries’ poorest people. The present patent system, on the other hand, provides limited incentive to create innovations for these people.”
The second candidate is patent trolling: some firms merely buy patents and sue others for infringing on their rights, without producing anything themselves. These patent trolls impose costs on other firms, which become hesitant to use technology and unwilling to innovate.
“However, many legislative and judicial steps have been taken since 2013 to address patent trolling in the US, making the issue—in our view—presently low in scale, neglectedness, and tractability.”
The third is evergreening, which doesn’t seem to be a high priority either:
“It does not appear that companies unfairly extend (i.e., evergreen) their patent terms using statutory strategies. However, there is reason to believe companies use other means such as the 30-month stay provision to extend effective market monopolies.”
The author also recommends further research into this area.
25. Training Economists
Hits-based development: funding developing-country economists (@Michael_Wiebe)
“One specific mechanism [for promoting growth] is to train developing-country economists, who then work in government and influence policy in a pro-growth direction, ultimately increasing the probability of a growth episode.”
26. Improving Welfare Algorithms
[Link] Improving the lives of millions of Latin Americans through better welfare targeting algorithms (@NORIEGA)
“More than 50 million people in Latin America are impacted by the decision of very simple linear algorithms which determine how much welfare they receive from social programs. Simple changes to the algorithm lead to hundreds of thousands of people being added or removed to major welfare programs.”
The author holds that improving these algorithms would allocate billions of dollars more effectively.
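A proxy-means-test-style linear score of the kind the post describes can be sketched as follows. The weights, features, and cutoff here are invented for illustration; the actual targeting algorithms discussed in the post differ:

```python
# Hypothetical sketch of a proxy-means-test-style linear eligibility score.
# Weights, features, and the cutoff are invented for illustration.
WEIGHTS = {
    "rooms_per_person": -0.8,      # more living space -> looks less poor
    "owns_fridge": -0.5,           # asset ownership -> looks less poor
    "years_schooling_head": -0.3,  # education of the household head
    "dependents": 0.4,             # more dependents -> looks poorer
}
CUTOFF = 0.0  # households scoring above the cutoff qualify (assumed)

def eligibility_score(household: dict) -> float:
    """Linear proxy score: higher means 'poorer' under this model."""
    return sum(WEIGHTS[k] * household[k] for k in WEIGHTS)

household = {"rooms_per_person": 0.5, "owns_fridge": 0,
             "years_schooling_head": 4, "dependents": 3}
score = eligibility_score(household)
# A small change to a single weight can move a marginal household across
# the cutoff, adding it to or removing it from the program.
print(score, score > CUTOFF)
```

This is what makes the intervention plausible: because the model is a simple weighted sum applied to millions of households, small improvements to the weights can shift eligibility for large numbers of marginal cases at once.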
27. Low Back Pain
Preventing low back pain with exercise (@Ryan Kidd)
This post argues that exercise seems to be the most effective treatment to prevent low back pain, which is a symptom experienced by people of all ages and socioeconomic circumstances. A comment by Aaron suggests that this might be a solid cause area in the developing world.
28. Chronic Pain
Should Chronic Pain be a cause area? (@mariushobbhahn)
This post gives an overview of what chronic pain is and its relation to demographic and environmental factors. It then discusses whether it could be a worthy cause area, without reaching a conclusion.
29. Intactivism
Intactivism as a potential Effective Altruist cause area? (@Question Mark)
The author argues against the practice of circumcision and proposes its abolition.
30. Delaying Aging
This post argues that aging is the most common cause of death and human suffering. There are already known effective interventions (exercise, fighting smoking) against accelerated aging. For this reason, research confirming that aging itself is responsible for aging-related diseases could yield many cost-effective programs.
31. Health in Younger Generations
The health of millennials (@Michael_2358)
Inspired by a study on the health of millennials, the author suggests that mental and physical health might be deteriorating in younger generations. This idea is only proposed as a subject for future research.
32. Charter Cities
Intervention Report: Charter Cities (@David Rhys Bernard & @Jason Schukraft)
A comprehensive report on the subject. Conclusions are pessimistic, given the uncertainties involved, but the authors state that further research could be valuable at a modest cost.
This is a defense of charter cities and a reply to the foregoing report.
33. Alleviating Price Risk
This post is an excerpt from this piece by Peter Harrigan. It calls attention to the idea that price risk is one of the more overlooked sources of poverty. Third-world farmers face constant risk from price volatility, which reduces their profits and can drive them into bankruptcy.
34. Fighting Corruption
Fighting corruption in aid-embezzling (@MarcSerna)
This post estimates that aid-embezzling in developing countries could cut “from 10% to 50% of donations received by charitable organizations in humanitarian and development settings”. It proposes the creation of an organization dedicated to auditing development and humanitarian projects in such countries.
35. Lead Exposure
Global lead exposure report (@David Rhys Bernard & @Jason Schukraft)
This is a comprehensive report on the problem of lead exposure. The authors conclude that it is neglected and deserves more attention among effective altruists leaning towards neartermist interventions.
36. Fungal Diseases
Antifungal Resistance—The Neglected Cousin of Antibiotic Resistance (@Madhav Malhotra)
This post links to an interview with Marcio Rodrigues, an expert in the field, about the importance and neglectedness of fungal diseases.
Global Health and Development: Mental Health
Related categories: States of consciousness.
Pointer: This cause candidate has two related EA Forum tags: Mental Health (Cause Area) and Subjective Well-Being. For illustration purposes:
Cause profile: mental health (@MichaelPlant)
“Not only does mental illness seem to cause as much, if not more, total worldwide unhappiness than global poverty, it also seems far more neglected. Effective mental health interventions exist currently. These have been improving over time and we can expect further improvements. I estimate the cost-effectiveness of a particular mental health organisation, StrongMinds, and claim it is (at least) four times more effective per dollar than GiveDirectly, a GiveWell recommended top charity. This assumes we understand cost-effectiveness in terms of happiness, as measured by self-reported life satisfaction [...] Even if mental health is a large-scale, neglected problem, we shouldn’t consider it a possible moral priority if there aren’t effective treatments. Fortunately, there are.”
HLI’s Mental Health Programme Evaluation Project—Update on the First Round of Evaluation (@Jasper Synowski)
The project, which seems to be ongoing, tries to systematically assess a long list of mental health interventions.
Initially, various EAs proposed varied experimental mental health interventions. There are a number of posts asking if “mental health issue X” should fall within Effective Altruism’s purview. Of these, mental health apps represent probably the best-argued intervention and stand in a class of their own. In particular, they are scalable.
“Fixing Adolescence” as a Cause Area? (@kirchner.jan)
Adolescence comes frequently with substantial suffering. After analyzing abundant data, and pointing out the lack of strategies to tackle this problem, the author suggests that more research is desirable to establish this as a cause area.
[Link] Preprint is out! 100,000 lumens to treat seasonal affective disorder (@Fabienne)
“Seasonal affective disorder (SAD) is common and debilitating. The standard of care includes light therapy provided by a light box; however, this treatment is restrictive and only moderately effective. Advances in LED technology enable lighting solutions that emit vastly more light than traditional light boxes. Here, we assess the feasibility of BROAD (Bright, whole-ROom, All-Day) light therapy and get a first estimate for its potential effectiveness.”
This post argues that instead of trying to discourage sex workers, there is a number of reasons for which “it is worth considering integrating this profession more into society”:
“Sex workers satisfy a very essential need, providing not only sexual intercourse but also company, a listening ear, a safe space where is no judgment. Otherwise dangerous paraphilias can be safely practiced, believed to be shameful wants can be satisfied, never said fantasies can be discussed. Victims of sexual abuse, people with mental health conditions, couples with sexual problems can not only talk or discuss their problems, as it would be possible in a clinical setting but can also receive practical help too. All of these attributes make sex work a potentially valuable addition to mental health and wellbeing services.”
Mental health apps: Mind Ease: a promising new mental health intervention (@PeterBrietbart)
See also: Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was (@MichaelPlant)
Preventing/Curing Trauma: Is trauma a potential EA cause area? (@nonzerosum)
Preventing Child Abuse: Is preventing child abuse a plausible Cause X? (@Milan_Griffes)
Anti-tribalism: Anti-tribalism and positive mental health as high-value cause areas (@Kaj_Sotala)
Insomnia: Insomnia: a promising cure (@Halstead)
Sleep loss: Should we consider the sleep loss epidemic an urgent global issue? (@orenmn)
Mindfulness Based Stress Reduction: Cost Effectiveness of Mindfulness Based Stress Reduction (@Elizabeth)
States of Consciousness.
1. Psychedelics
Related categories: Global Health and Development: Mental Health.
The post makes the case from an EA perspective and offers a cash prize for counter-arguments.
2. Fundamental Consciousness Research
Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk (@MikeJohnson)
″...if your goal is to reduce suffering, it’s important to know what suffering is.”
3. Increasing Access to Pain Relief (Opioids) in Developing Countries
Related categories: Global Health and Development. Politics: System change, targeted change, policy reform.
Access to opioids is unduly restricted, such that the pain of some deaths can amount to “torture by omission”. The author suggests, as a tentative donation target, the Pain and Policy Studies Group of the University of Wisconsin-Madison which “runs ‘International Pain Policy Fellowships’, which train national champions of the cause to identify and overcome barriers to the use of opioids in their countries. The programme has had numerous in-country successes.” However, the program seems to now be defunct. One organization that I personally perceive as promising, which is working in this space, is The Organisation for the Prevention of Intense Suffering.
4. Cluster Headaches
“Cluster headaches are considered one of the most excruciating conditions known to medicine...”; “there is [...] evidence that psilocybin mushrooms can prevent and abort entire episodes. Such evidence has been published as survey data and is also widely reported by patients in cluster headache groups. Two Phase I RCTs are ongoing and should add to the existing evidence for efficacy. Lack of access to psilocybin mushrooms and widespread information about using them are key barriers to effective treatment for many patients.”
5. Drug Policy Reform
Related categories: Politics: System change, targeted change, policy reform.
High Time For Drug Policy Reform (@MichaelPlant)
“In the last 4 months, I’ve come to believe drug policy reform, changing the laws on currently illegal psychoactive substances, may offer a substantial, if not the most substantial, opportunity to increase the happiness of humans alive today.”
6. Love
Love seems like a high priority (@kbog)
“Making it possible for people to deliberately fall in love seems like a high priority, competitive with good short- and medium-term causes such as malaria prevention and anti-aging. However, there is little serious work on it.”
7. Universal Euphoria
Happy animal farm — Universal euphoria (@Michael Dickens)
Hedonium (Less Wrong tag page)
Wireheading (Less Wrong tag page)
“Rats on heroin” thought experiment
Compressing as much happiness as possible into a unit of matter can be pursued at all levels of technological development. With current technology, we could have animal farms dedicated to making rats, about which we know a fair bit, maximally happy. With future technology, we could have computer simulations of maximal bliss.
The idea is sometimes thought to be morally repugnant or philosophically misguided, and a quick Fermi estimate suggests that current happy animal farms would not be cost-effective compared to interventions in the developing world.
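A Fermi estimate of this kind can be sketched as follows. Every parameter here is an invented, illustrative assumption rather than a figure from the linked posts:

```python
# Illustrative Fermi estimate for a "happy rat farm" (all inputs assumed).
cost_per_rat_year = 25.0   # $/rat/year for housing, food, and care (assumed)
rat_moral_weight = 0.02    # welfare of a rat-year relative to a human
                           # life-year (assumed, and highly contested)
givewell_benchmark = 75.0  # rough $/DALY for GiveWell top charities (assumed)

cost_per_human_equiv = cost_per_rat_year / rat_moral_weight
ratio = cost_per_human_equiv / givewell_benchmark
print(f"~${cost_per_human_equiv:.0f} per human-equivalent happy year, "
      f"~{ratio:.0f}x the benchmark")
```

Under these assumptions the farm comes out roughly an order of magnitude less cost-effective than the benchmark, which is the shape of the conclusion above; the comparison is very sensitive to the assumed moral weight.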
Space
Pointer: This cause candidate has its own EA Forum tag.
Related categories: Existential risk, Transhumanism, Politics: System change, targeted change, policy reform.
1. Space Colonization
What analysis has been done of space colonization as a cause area? (@reallyeli)
The Case for Space: A Longtermist Alternative to Existential Threat Reduction (@Giga)
If we had a backup planet, existential risk would be reduced. Further, we’d be able to have more people. However, even with a backup planet, existential risk in both planets would be correlated, and the protection from extinction that the second planet provides would be inversely proportional to the degree of correlation. One might expect this correlation to be particularly high for hostile AI. See here for some discussion on these points.
User @kbog looked at this issue in more depth, and concluded that:
In this post I take a serious and critical look at the value of space travel. Overall, I find that the value of space exploration is dubious at this time, and it is not worth supporting as a cause area, though Effective Altruists may still want to pay attention to the issue. I also produce specific recommendations for how space organizations can rebalance their operations to have a better impact.
2. Space Governance
Space governance is important, tractable and neglected (@Tobias_Baumann)
“I argue that space governance has been overlooked as a potentially promising cause area for longtermist effective altruists. While many uncertainties remain, there is a reasonably strong case that such work is important, time-sensitive, tractable and neglected, and should therefore be part of the longtermist EA portfolio [...] The work I have in mind aims to replace the current state of ambiguity with a coherent framework of (long-term) space governance that ensures good outcomes if and when large-scale space colonisation becomes feasible.”
Education
Related categories: Global Health and Development.
1. Global Basic Education
The post could use some work, but I can imagine both of its points being true: education has intrinsic value (all things being equal, we want to have more education), and extrinsic value (it is somewhat correlated with health outcomes, and economic productivity).
2. Philosophy in Schools
“In this post I consider the possibility that the Effective Altruism (EA) movement has overlooked the potential of using pre-university education as a tool to promote positive values and grow the EA movement. Specifically, I focus on evaluating the potential of promoting the teaching of philosophy in schools.”
Climate Change
Related categories: Politics: System change, targeted change, policy reform. Politics: Culture war.
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
1. General
Does climate change deserve more attention within EA? (@Louis_Dixon)
Global development interventions are generally more effective than climate change interventions (@HaukeHillebrandt)
Climate Change Is Neglected By EA (@mchr3k)
Most notably, climate change has a long tail of bad outcomes, and it impacts more than just GDP, as previously modelled.
Note: The disagreement about whether EA should give more attention to climate change is probably older than any of these posts.
2. Public R&D to Deal With Climate Change
Vox article (@Henry_Stanley)
3. Leveraging the Climate Change Movement
“This willingness to act seems to be mostly tied to climate change and cannot be easily directed towards more effective causes. Therefore, I think EAs could influence existing concerns and willingness to act on climate change to direct funds/donations towards cost-effective organizations (i.e., CfRN, CATF) with relatively low investment of time.”
4. Extinguishing or Preventing Coal Seam Fires
“Much greenhouse gas emissions comes from uncontrolled underground coal fires. I can’t find any detailed source on its share of global CO2 emissions; I see estimates for both 0.3% and 3% quoted for coal seam fires just in China, which is perhaps the world’s worst offender. Another rudimentary calculation said 2-3% of global CO2 emissions comes from coal fires. They also seem to have pretty bad local health and economic effects, even compared to coal burning in a power plant (it’s totally unfiltered, though it’s usually diffuse in rural areas). There are some methods available now and on the horizon to try and put the fires out, and some have been practiced—see the Wikipedia article. However, the continued presence of so many of these fires indicates a major problem to be solved with new techniques and/or funding for the use of existing techniques.”
5. Paris-Compliant Offsets
“We should be rapidly exploring higher quality and more durable offsets. If adopted, these principles could be a scalable and high-leverage way of moving organisations towards net-zero.”
6. Helping Coral Reefs Survive Climate Change
7. CO2 Sensors
[Question] Any initiative to introduce small and cheap CO2 sensors? (@Martin (Huge) Vlach)
The idea here is to raise awareness about high CO2 levels by introducing sensors in smartphones and similar devices.
8. Hurricanes
Seeking a Collaboration to Stop Hurricanes? (@Anthony Repetto)
This post proposes stopping hurricanes as a way to avoid the damage they cause. As high surface temperatures are a necessary condition for hurricanes, cooling the waters down would prevent their formation, which might be achieved by regularly provoking waterspouts (that is, “30mph ‘humidity tornadoes’ over hot waters”) with the help of a special device described by the author.
Existential and Global Catastrophic Risks
Pointer: This cause has its own EA Forum tag. More cause candidates may be found there, or in the related AI Alignment, AI Governance and Civilizational Collapse & Recovery tags.
1. Corporate Global Catastrophic Risks
Corporate Global Catastrophic Risks (C-GCRs) (@HaukeHillebrandt)
“It might be useful to think of corporations as dangerous optimization demons which will cause GCRs if left unchecked by altruism and philanthropy.”
Comments present a different perspective.
2. Aligning Recommender Systems
Pointer: See the related Near-Term AI Ethics tag.
Aligning Recommender Systems as Cause Area (@IvanVendrov)
“In this post we argue that improving the alignment of recommender systems with user values is one of the best cause areas available to effective altruists, particularly those with computer science or product design skills.”
3. Surveillance
Given the seriousness of surveillance tech, which can stabilise totalitarian regimes and destabilise democratic ones, this post proposes funding advanced AI-based stylometrics research to see how far it can be developed, and raising awareness of whatever results this research produces.
4. Keeping Calories in the Ocean for a Possible Catastrophe
In particular, the post suggests cultivating bacteria. ALLFED’s director answers in the comments.
Note: Included here for completeness. This isn’t, strictly speaking, a new cause area since ALLFED is now working on it.
5. Recovery from an Existential Catastrophe
This post lists some ideas for better recovery of civilization in case an existential catastrophe takes place, which seems rather neglected compared to the prevention of existential risks.
6. Resilience of Industry and the Electric Grid
7. Foods for Global Catastrophes (ALLFED)
Note: Included here for completeness. This isn’t, strictly speaking, a new cause area since ALLFED is now working on it.
8. Preventing Ideological Engineering and Social Control
Related categories: Politics
Ideological engineering and social control: A neglected topic in AI safety research? (@geoffreymiller)
“Will enhanced government control of populations’ behaviors and ideologies become one of AI’s biggest medium-term safety risks?”
9. Reducing Long-Term Risks from Malevolent Actors
Reducing long-term risks from malevolent actors (@David_Althaus)
The authors make the case that malevolent actors rising to power has many negative externalities. They propose countermeasures, such as advancing the science of malevolence. This would involve developing better constructs and measures of malevolence, and hard-to-beat detection methods, such as neuroimaging techniques. Comments suggest further concrete measures, such as having elections for parties rather than leaders (which gives less power to individuals).
10. Autonomous Weapons
Pointer: This cause candidate has its own EA Forum tag.
Why those who care about catastrophic and existential risk should care about autonomous weapons (@aaguirre)
On AI Weapons (@kbog)
11. AI Governance
Pointer: This cause candidate has its own EA Forum tag, and is already being worked on at FHI’s Centre for the Governance of AI, among other places. For illustration purposes:
AI Governance: Opportunity and Theory of Impact (@Allan Dafoe)
12. International Cooperation
The author argues that international cooperation could especially reduce the risks of unaligned AI and engineered pandemics. Allocating funding to efforts to foster it therefore seems highly important.
13. Improving Disaster Shelters to Increase the Chances of Recovery From a Global Catastrophe
Pointer: This cause candidate has its own EA Forum tag.
Improving disaster shelters to increase the chances of recovery from a global catastrophe (@Nick_Beckstead)
“What is the problem? Civilization might not recover from some possible global catastrophes. Conceivably, people with access to disaster shelters or other refuges may be more likely to survive and help civilization recover. However, existing disaster shelters (sometimes built to ensure continuity of government operations and sometimes built to protect individuals), people working on submarines, largely uncontacted peoples, and people living in very remote locations may serve this function to some extent.
What are the possible interventions? Other interventions may also increase the chances that humanity would recover from a global catastrophe, but this review focuses on disaster shelters. Proposed methods of improving disaster shelter networks include stocking shelters with appropriately trained people and resources that would enable them to rebuild civilization in case of a near-extinction event, keeping some shelters constantly full of people, increasing food reserves, and building more shelters. A philanthropist could pay to improve existing shelter networks in the above ways, or they could advocate for private shelter builders or governments to make some of the improvements listed above.”
14. Discovering Previously Unknown Existential Risks
The Importance of Unknown Existential Risks (@MichaelDickens)
The most dangerous existential risks appear to be the ones that we only became aware of recently. As technology advances, new existential risks appear. Extrapolating this trend, there might exist even worse risks that we haven’t discovered yet.
15. Using Insights from International Relations Theory to Facilitate International Cooperation Against Existential Risks
International Cooperation Against Existential Risks: Insights from International Relations Theory (@Jenny_Xiao)
Dealing with existential risks requires international cooperation. Naturally, one might expect scholars of international relations (IR) to provide the best answers regarding how states can cooperate to protect humanity’s long-term potential. Yet reading Toby Ord’s new book The Precipice as a PhD student in IR, I am surprised how little attention my field has paid to the existential threats Toby raised in the book, such as global disease, climate change, and risks from artificial intelligence (AI).
Although mainstream IR often overlooks existential risks, it does offer insight into how to make international cooperation easier. In particular, IR theory’s emphasis on the importance of national interests offers us a realistic view of international behavior. Isaac Asimov, a universalist and humanist, once dismissed decisions based on the national interest as “emotional” reactions on “such nineteenth century matters as national security and local pride.” My view is exactly the opposite: We should work with states as they are, not what we wish them to be.
16. Reducing Risks from Whole Brain Emulation
The Age of Em (Robin Hanson)
17. Preventing/Avoiding Stable Longterm Totalitarianism
Pointer: This cause candidate has related EA forum tags: Global dystopia and Totalitarianism
Chapter “The Totalitarian Threat”, in Global Catastrophic Risks (Bryan Caplan)
18. Reducing Risks from Atomically Precise Manufacturing / Molecular Nanotechnology
Molecular machinery and manufacturing with applications to computation (Eric Drexler, 1991)
19. AGI Safety Research Far in Advance
[Link] A case for AGI safety research far in advance (@steve2152)
“Among other things, I make a case for misaligned AGI being an existential risk which can and should be mitigated by doing AGI safety research far in advance.”
20. Evolutionary AI alignment
This post discusses the possibility of simulating human evolution as a new approach to the alignment problem:
“If AI alignment is intractable, but human evolution is robust in producing something close to human values, we could try to simulate/mimic human evolution to create a superintelligent successor AI.”
21. Extraterrestrial Intelligence
An EA case for interest in UAPs/UFOs and an idea as to what they are (@TheNotSoGreatFilter)
Given the current evidence of unidentified aerial phenomena, the probability that they are due to extraterrestrial crafts, and the enormous implications for our world models if this were the case, it may be reasonable to do more research on the subject.
22. Fundamental Research
Cause area: Fundamental Research (@amit.chilgunde)
The author claims that “research for the sake of research” is the best way to anticipate unknown future existential risks.
23. Universal Basic Income
“I argue that poverty alleviation would reduce both mortality and existential risk, and that, among anti-poverty programs, universal basic income has a number of advantages over targeted and in-kind benefits.”
24. Short-range Forecasting
The author argues that short-range forecasting can be useful for longtermism, if there is a coordinated effort to respond rapidly to potential crises in their early stages (for example, by creating an EA Early Warning Forecasting Center).
25. Risk from Asteroids
Risks from Asteroids (@finm)
The author gives an overview of this particular risk and explains why it is not a cause to be prioritized:
“First, the international effort to track near-Earth asteroids is potentially humanity’s most successful effort to date to directly address an existential risk. . . . Second, expanding beyond mere detection to building deflection systems probably shouldn’t be a priority right now — not just because other comparably tractable risks look far more urgent, but because deflection technology could pose risks of its own from malign use.”
26. Biosecurity
Project Ideas in Biosecurity for EAs (@Davidmanheim)
“In conjunction with a group of other EA biosecurity folk, I helped brainstorm a set of projects which seem useful, and which require various backgrounds but which, as far as we know, aren’t being done, or could use additional work. Many EAs have expressed interest in doing something substantive related to research in bio, but are unsure where to start—this is intended as one pathway to do so.”
Drawing on experience from the last pandemic, this post highlights the danger misinformation poses to managing future global catastrophic biological risks, and sketches some possible ways to address it.
The author argues that evaluation of non-pharmaceutical interventions (e.g. mask wearing, hand-washing, social distancing, etc.) is neglected and that the scale of its impact could be high.
This post gives a list of longtermist biosecurity projects. The first is about improving response time to biothreats by early detection. This could happen by setting up an Early Detection Center where a small team collects “samples from volunteer travelers around the world and then does a full metagenomic scan”.
The second project points out that most personal protective equipment (PPE)—e.g. masks, suits, etc.—has a number of disadvantages. Materials science and product design could produce better PPE than our current options, i.e. “highly effective in extreme cases, easy to use, reliable over long periods of time, and cheap/abundant”.
The third proposes better medical countermeasures against biothreats “either by 1) producing targeted countermeasures against particularly concerning threats (or broad-spectrum countermeasures against a class of threats), or by 2) creating rapid response platforms that are reliable even against deliberate adversaries”.
The fourth points out some possible ways of strengthening the Biological Weapons Convention. The fifth recommends further investigation on the advantages of sterilization technologies “that rely on physical principles (e.g. ionizing radiation) or broadly antiseptic properties (e.g., hydrogen peroxide, bleach) rather than molecular details (e.g. gram-negative antibiotics)”.
The last project is to create pandemic-proof refuges:
“Existing bunkers provide a fair amount of protection, but we think there could be room for specially designed refuges that safeguard against catastrophic pandemics (e.g. cycling teams of people in and out with extensive pathogen agnostic testing, adding a ‘civilization-reboot package’, and possibly even having the capability to develop and deploy biological countermeasures from the protected space).”
Rationality and Epistemics
Pointer: This cause candidate has its own EA Forum tag. It has seen more work on LessWrong.
1. Developing the Rationality Community
Rationality as an EA Cause Area (@casebash)
2. Progress Studies
3. Epistemic Progress
Epistemic Progress has also been suggested as a cause area, but this topic has seen more activity outside the EA Forum.
Donation timing
1. Counter-Cyclical Donation Timing
2. Patient Philanthropy
Pointer: This cause has its own EA Forum tag. For illustration purposes:
Patience and Philanthropy, by Trammell (previously “Discounting for Patient Philanthropists”)
3. Improving our Estimate of the Philanthropic Discount Rate
Estimating the Philanthropic Discount Rate (@MichaelDickens)
How we should spend our philanthropic resources over time depends on how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend less now and more later.
According to a simple model, improving our estimate of the discount rate might be the top effective altruist priority.
Other
Trivia: See Wastebasket Taxon.
1. Eliminating Email
Civilization could have better workflows around email.
2. Software Development in EA
What are some software development needs in EA causes? (@evelynciara)
Note: Post is a stub.
3. Tweaking the Algorithms which Feed People Information
Short-Term AI Alignment as a Priority Cause (@Lê Nguyên Hoang)
The post is structured in a confusing way, but a core suggestion is to tweak various current AI systems, particularly the YouTube and Facebook algorithms, to better fit EA values. However, the post doesn’t give specific suggestions of the sort a YouTube engineer could implement.
4. Positively Shaping the Development of Crypto-Assets
The article analyzes, from an ITN perspective and as of 2018, how promising it would be to influence the development of crypto-assets. Three of its most notable points are:
Effective Altruists should shape the implementation of any high-impact new technology,
crypto-assets constitute a new organizational technology which could solve a bunch of coordination problems, and
the use of crypto assets could result in beneficial resource redistribution.
A Democratic Currency (@MikkW)
The author outlines the creation of a new (digital) currency with a view to tackling poverty:
“[T]he major source of the currency will be in the people as a whole, with a certain fixed percentage of the value represented by the currency (that is, the market cap), being credited, on regular intervals (for example, every day), to every single person known to the currency.”
The author argues that decentralized creation of money (instead of money creation by central banks) could lead to better ways of distributing money.
5. Increasing Economic Growth
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
Growth and the case against randomista development (@HaukeHillebrandt, @Halstead)
Can we drive development at scale? An interim update on economic growth work (@smclare, @AidanGoth)
Quantifying the Impact of Economic Growth on Meat Consumption (@kbog)
6. For-Profit Companies Serving Emerging Markets
7. Land Use Reform
Pointer: This cause candidate has its own EA Forum tag. The Land Use Reform tag covers posts that discuss changes to regulations around the use of land (e.g. for housing or business development). These changes could lead to increases in economic growth and welfare in locations around the world.
Cause Area: UK Housing Policy (@GMcGowan)
The author gives an overview of the problem and presents the solution advocated by the YIMBY movement in the UK, namely a political reform reducing veto power and giving households on a street the means to allow development by a majority vote. The concluding section lists a number of reasons for and against EA involvement in this area.
This post examines the problem posed by veto players when majorities are trying to bring forward development. Improving coordination techniques could be the way to break such deadlock not only here, but also in many other areas:
“With high uncertainty, I think that focusing a small amount of resources on improving broader coordination techniques for reducing such large deadweight losses in various areas could be a highly impactful, tractable and neglected area of research.”
8. Markets for Altruism
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
Certificates of impact (@Paul_Christiano)
9. Meta-Science
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
The Intellectual and Moral Decline in Academic Research (Edward Archer)
Prioritization in Science—current view (@EdoArad)
Eva Vivalt: Forecasting Research Results (Eva Vivalt)
10. Scientific Progress
Pointer: This cause candidate has its own EA Forum tag. However, it is mostly a stub, as far as EA Forum posts go. For illustration purposes:
How to estimate the EV of general intellectual progress (@Ozzie Gooen)
Science policy as a possible EA cause area: problems and solutions (@PabloAMC)
“Creating the right incentive structures in science could make science more fluid, efficient, and painless. . . . The aim of this post is to suggest science policy as a possible research area for EAs where it might be possible to do progress that results in better science.”
11. Improving Information
A Case for Better Feeds (@Nathan Young)
A proposal to adapt the information on EA databases to other methods of accessing it (email, Twitter, RSS readers, etc.). Improved distribution of information should lead to an increase in general impact.
This post highlights the importance of Wikipedia editing and gives useful suggestions for how this should be done.
12. Cause Prioritisation Research
The case of the missing cause prioritisation research (@weeatquince)
This post points out that there has been little progress on cause research for several years. Some difficulties relating to it are discussed, but “they are all overcomeable, and they do not make a strong case that such research is intractable”.
13. EA Art & Fiction
Pointer: This cause candidate has its own EA Forum tag. For illustration purposes:
When can Writing Fiction Change the World? (@timunderwood)
14. Corporate Giving Strategies and Corporate Social Responsibility
How we promoted EA at a large tech company (v2.0) (@jlewars)
Note: This cause area comprises two distinct areas: giving by companies, and giving by employees.
Note: Corporate Social Responsibility has the potential to be used by companies to cheaply distract from the pretty terrible working conditions in their supply chains, by having ineffective corporate giving strategies which donate to recipients in the developed world (as of the last time I checked, two years ago). See International Supply Chain Accountability.
15. Biodiversity
[Question] Neglected biodiversity protection by EA (@Danny Forest)
This post claims, without elaborating on it, that biodiversity of life should be considered worth funding. Some of the comments argue why it shouldn’t be.
Answer to “Preserving natural ecosystems?” (@RafaelF)
The author gives a list of ideas for tackling this issue.
16. Population Size Reduction
The author argues that reduction of population growth should be prioritised, because it would have significant positive effects on several other EA cause areas, such as climate change, animal welfare, etc.
17. Metaverse Democratisation
“This post argues that we may now be at a tipping point that decides whether we are steering either towards a utopian or to a dystopian future of hybrid virtual/physical realities. It encourages a discussion on the assessment of the problem and brainstorming of potential solutions.”
18. Sleeping Less to Increase Lifespan
Theses on Sleep (@guzey)
The author argues that sleeping under 6 hours is perfectly healthy, and that by sleeping less, people would effectively lengthen their (waking) lifespan.
19. Combating Ageism
[Linkpost] Is Combatting Ageism The Most Potentially Impactful Form of Social Activism? (@JosephBronski)
The author claims that people between 15 and 17 years are “the most oppressed group in the West”.
20. S-risks
How can we reduce s-risks? (@Tobias_Baumann)
“In this post, I’ll give an overview of the priority areas that have been identified in suffering-focused cause prioritisation research to date.”
The Importance of Artificial Sentience (@Jamie_Harris)
“Artificial sentient beings could be created in vast numbers in the future. While their future could be bright, there are reasons to be concerned about widespread suffering among such entities. . . . Research may help us assess which actions will most cost-effectively make progress.”
The problem of artificial suffering (@Martin Trouilloud)
This post reviews Metzinger’s paper, Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology.
21. EA Meta
“This post goes over why we think Effective Altruism meta could be highly impactful, why CE [Charity Entrepreneurship] is well-positioned to incubate these charities, why 2021 is a good time, differences in handling EA meta compared to other causes, and potential concerns. We finish by introducing our three top recommendations for new charities in the space: exploratory altruism, earning to give +, and EA training.”
22. Prioritization research on slacktivism
There are some tasks that could be done with almost no effort, and yet they could have a lot of impact. This post notes that “[s]ome slacktivism is probably way more effective than other slacktivism, so somebody should do some prioritization research to find the best slacktivism techniques”.
Other lists of ideas
The following posts collect lots of funding ideas, many of which are novel interventions and cause areas:
E.A. Megaproject Ideas (@Tomer_Goloboy)
Milan Griffes on EA blindspots (@Gavin)
[Question] What EA projects could grow to become megaprojects, eventually spending $100m per year? (@Nathan Young)
EA Projects I’d Like to See (@finm)
EA megaprojects continued (@mariushobbhahn, @slg, @MaxRa, @JasperGeh & @Yannick_Muehlhaeuser)
The Future Fund’s Project Ideas Competition (@Nick_Beckstead, @ketanrama, @leopold, @William_MacAskill)
Appendix I: Method
I queried all forum posts using the following query at the EA forum’s GraphQL API:
{
  posts(input: {
    terms: {
      meta: null # this seems to get both meta and non-meta posts
      after: "10-1-2000"
      before: "10-11-2020" # or some date in the future
    }
  }) {
    results {
      title
      url
      pageUrl
      postedAt
    }
  }
}
Then, I copied them over to a document called last5000posts.txt.
The EA forum API returns a maximum of 5000 entries, but this is not a problem because it currently only has 4077 posts.
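For readers who want to reproduce this step programmatically, here is a minimal Python sketch. The endpoint URL and the exact response shape are assumptions inferred from the query above, not something this post confirms:

```python
import json
import urllib.request

# The query from above, as a single string.
QUERY = """
{
  posts(input: {terms: {meta: null, after: "10-1-2000", before: "10-11-2020"}}) {
    results { title url pageUrl postedAt }
  }
}
"""

def extract_posts(response):
    """Pull the post records out of a standard GraphQL response body."""
    return response["data"]["posts"]["results"]

def fetch_posts(endpoint="https://forum.effectivealtruism.org/graphql"):
    """POST the query to the (assumed) GraphQL endpoint and return post records."""
    request = urllib.request.Request(
        endpoint,
        data=json.dumps({"query": QUERY}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as resp:
        return extract_posts(json.load(resp))
```

The returned titles and URLs could then be written out to last5000posts.txt, one per line.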
I then searched for the keywords “cause x”, “cause y”, “new cause”, “cause”, “area”, “neglected”, “promising”, “proposal”, “intervention”, “effectiveness”, “cost-effective”, using grep, a Unix/Linux tool, taking care to use the case-insensitive option (this is necessary because, although links contain the title in lowercase, links don’t always contain the full title). An example of using grep to do this is:
grep -i "cause x" last5000posts.txt >> searchoutputs.txt
which appends the results to the searchoutputs.txt file if the file exists, and otherwise creates that file.
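The eleven per-keyword grep runs can also be combined into a single pass. A minimal sketch in Python (the keyword list is the one given above; `matching_lines` is a name introduced here for illustration):

```python
import re

KEYWORDS = [
    "cause x", "cause y", "new cause", "cause", "area", "neglected",
    "promising", "proposal", "intervention", "effectiveness", "cost-effective",
]

def matching_lines(lines, keywords=KEYWORDS):
    """Return the lines containing any keyword, case-insensitively
    (the equivalent of grep -i, but in one pass and without the
    duplicates produced by appending one grep run per keyword)."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    return [line for line in lines if pattern.search(line)]
```

For example, `matching_lines(open("last5000posts.txt"))` would replace all the separate grep invocations.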
I then looked through the posts with the “Cause-Prioritization” tag and under the most upvoted posts to see if I had missed anything. I then went through all EA Forum tags which had some relation to cause candidates and read through the relevant posts.
When I started tagging the posts I’d found, I found out about the “Less-discussed Causes” tag. I didn’t like its categorization scheme, which also included things other than cause candidates, so I continued creating my own tags. The “Less-discussed Causes” tag had about 5 posts I wouldn’t have found. I also found many more posts which were not in the tag.
I imagine a similar method could be used to efficiently populate other tags.
Appendix II: A Note on Nomenclature
Trivia: See Soviet Nomenklatura.
Thanks to Michael Aird for pushing for clarification of the terms I’m using, and for asking exactly what this list was about.
Terms:
Cause Area: A broad category of causes. For example, “animal welfare and suffering” would be a cause area, “factory-farmed animals” and “wild animal welfare” would be slightly less-broad cause areas.
Cause: Something more specific than a cause area. For example, “analgesics for farm animals” would be a cause within the “factory-farmed animals” cause area.
Intervention, charity idea, etc.: Something more specific than a cause. For example, a ballot initiative to provide more space for factory-farmed animals, like 2018 California Proposition 12 would be an intervention. “Working to bring approval voting to Saint Louis” would be an intervention within the cause “better voting methods”, itself within the cause area “better political systems”.
Meta-intervention: An intervention that can be applied to different causes. For example, ballot initiatives.
Question: On what level of specificity am I working in this post?
In practice, it’s often hard to establish whether something is a cause area, a cause, or an intervention. For example, I’d say that “climate change” is a cause area and that “extinguishing or preventing coal seam fires” is a cause, but the original post refers to it as a cause area.
Column C (“Level of specificity”) of this Google Sheet contains information about the categorization chosen for each cause candidate (from intervention to cause area).
This work is licensed under a Creative Commons Attribution 4.0 International License.
There are many reasons why I think this post is good:
This post has been personally helpful to me in exploring EA and becoming familiar with the arguments for different areas.
Having resources like this also contributes to the “neutrality” and “big-tent-ness” of the Effective Altruism movement (which I think are some of the most promising elements of EA), and helps fight against the natural forces of inertia that help entrench a few cause areas as dominant simply because they were identified early.
Honestly, having a “Big List” that just neutrally presents other people’s claims, rather than a curated, prioritized selection of causes, is helpful in part because it encourages people to form their own opinions rather than deferring to others. When I look at this list of cause candidates, I see plenty of what I’d consider to be obvious duds, and others that seem sorely underrated. You’d probably disagree with me on the details, and that’s a good thing!
Finally, this post helped me realize that simply listing and organizing all the intellectual work that happens in EA can be an effective way to contribute. As a highly distributed, intellectually complex, and extremely big-tent social/academic/intellectual/philanthropic movement, there is a lot going on in EA and there is a lot of value in helping organize and explain all the different threads of thought that make up the movement. For this reason especially, it would be good to include this post in the Decadal Review, since the Decadal Review is also an attempt to wrangle and organize the recent intellectual progress of the EA movement—it’s a reasonably up-to-date map of the EA cause area landscape, all in one post! (On the downside, it might not work well in a printed book since it’s so heavy on hyperlinks.)
Awesome post, but you missed lots of cause topics and project suggestions. I kept noticing them while scrolling down and forgot most of them by now, but just off the top of my head: immigration, supervolcanoes, increasing the world’s population carrying capacity, gene editing to prevent genetic diseases, banning abortion, discovering objective moral truth, improving preference aggregation, inventing new electoral systems. (Not saying I do or do not endorse any of these, just regurgitating suggestions I’ve seen here.)
At the risk of being overly self-promotional, I have written a few posts on cause candidates that I don’t see listed here.
Unknown existential risks (not exactly a cause, but conceptually similar)
Estimating the Philanthropic Discount Rate suggests that we have some reason to believe it is extremely valuable to put more effort into estimating the philanthropic discount rate.
Charities I Would Like to See proposes four weird and neglected ideas.
Another potential cause area that’s not listed: reducing value drift (e.g., this post).
Thanks, I added the first two, as well as reducing value drift.
With regards to your four weird ideas.
Nice to see that Humane Pesticides has seen some work since then; I already had it in the list.
Happy Animal Farms / “tiling the universe with rats on heroin” / “filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible” sort of feel like the same cause, just a difference in implementation. Do you have a canonical reference for the idea? Also, “universal eudaimonia” is the wrong word to use for this because it sort of implies happiness through higher virtues, maybe “universal euphoria” would be a better fit.
Notes for future reference re: Values spreading / Highly targeted values Spreading:
https://forum.effectivealtruism.org/posts/BpuTtsz6J6GBycYvJ/on-values-spreading
https://www.utilitarian-essays.com/values-spreading.html
https://rationalaltruist.com/2013/06/13/against-moral-advocacy/
https://forum.effectivealtruism.org/posts/54Cdt4BR84vDcki6i/effective-altruism-and-free-riding
Added value spreading
Added universal euphoria
Interesting list! One important cause area that I think may have missed is preventing/avoiding stable longterm totalitarianism.
Toby Ord and Bryan Caplan have both written on this—see “the Precipice” for Ord’s discussion and “The Totalitarian Threat” in Bostrom’s “Global Catastrophic Risks” for Caplan’s.
It may be worth adding these to the list as it seems that totalitarianism is fairly widely accepted as a cause candidate. Thanks for the post as well, lots of interesting ideas and links in here!
For readers who may find the following useful:
Here is a freely accessible version of Caplan’s chapter
Posts tagged totalitarianism, and some posts tagged global dystopia, are relevant
80k write briefly about this here
I very strongly upvoted this because I think it’s highly likely to produce efficiencies in conversation on the Forum, to serve as a valuable reference for newcomers to EA, and to act as a catalyst for ongoing conversation.
I would be keen to see this list take on life outside the forum as a standalone website or heavily moderated wiki, or as a page under CEA or somesuch, or at QURI.
I feel it should be pointed out that there already is a similar standalone wiki, causeprioritization.org, and that until recently there was another similar website, PriorityWiki, but I don’t think either of them has received much traffic.
Thanks!
Ozzie has been proposing something like that. Initially, an airtable could be nice for visualization.
Thanks for putting this together, this is great!
Can you expand a little bit on what you mean by this and how it might work? I’m not sure what you mean by ‘forecasting’ in this context.
On the first day, alexrjl went to Carl Shulman and said: “I have looked at 100 cause candidates, and here are the five I predict have the highest probability of being evaluated favorably by you”
And Carl Shulman looked alexrjl in the eye, and said: “these are all shit, kiddo”
On the seventh day, alexrjl came back and said: “I have read through 1000 cause candidates in the EA Forum, LessWrong, the old Felicifia forum and all of Brian Tomasik’s writings. And here are the three I predict have the highest probability of being evaluated favorably by you”
And Carl Shulman looked alexrjl in the eye and said: “David Pearce already came up with your #1 twenty years ago, but on further inspection it was revealed to not be promising. Ideas #2 and #3 are not worth much because of such and such”
On the seventh day of the seventh week alexrjl came back, and said “I have scraped Wikipedia, Reddit, all books ever written and otherwise the good half of the internet for keywords related to new cause areas, and came up with 1,000,000 candidates. Here is my top proposal”
And Carl Shulman answered “Mmh, I guess this could be competitive with OpenPhil’s last dollar”
At this point, alexrjl attained nirvana.
Sure. So one straightforward thing one can do is forecast the potential of each idea/evaluate its promisingness, and then just implement the best ideas, or try to convince other people to do so.
Normally, this would run into incentive problems: if forecasting accuracy isn’t evaluated, the incentive is just to make the forecast that would otherwise benefit the forecaster. But if you have a bunch of aligned EAs, that isn’t much of a problem.
Still, one might run into the problem that maybe the forecasters are in fact subtly bad; maybe you suspect that they’re missing a bunch of gears about how politics and organizations work. In that case, we can still try to amplify some research process we do trust, like a funder or incubator who does their own evaluation. For example, we could get a bunch of forecasters to try to forecast whether, after much more rigorous research, some more rigorous, senior and expensive evaluator also finds a cause candidate exciting, and then carry out the expensive evaluation only for the ideas forecast to be the most promising.
Separately, I’m interested in altruistic uses for scalable forecasting, and cause candidates seem like a rich field to experiment on. But right now, these are just ideas, without concrete plans to follow up on them.
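To make that selection step concrete, here is a toy sketch; all candidate names and probabilities below are invented for illustration, and a real version would use proper scoring rules and a better aggregation method than a simple mean:

```python
# Toy sketch of the amplification idea: cheap aggregated forecasts decide
# which cause candidates get sent to the expensive evaluation step.
from statistics import mean

# Hypothetical forecasts: each forecaster's probability that the expensive
# evaluator would rate the candidate as promising after rigorous research.
forecasts = {
    "candidate_a": [0.10, 0.15, 0.05],
    "candidate_b": [0.60, 0.55, 0.70],
    "candidate_c": [0.30, 0.25, 0.35],
}

def top_candidates(forecasts, k=1):
    """Aggregate each candidate's forecasts (simple mean) and return the
    k candidates most worth the expensive evaluation."""
    aggregated = {name: mean(ps) for name, ps in forecasts.items()}
    return sorted(aggregated, key=aggregated.get, reverse=True)[:k]

print(top_candidates(forecasts, k=2))  # → ['candidate_b', 'candidate_c']
```

Only the top-k candidates would then get the senior evaluator’s time, and the forecasters would later be scored against that evaluator’s verdicts.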
Thanks. I hadn’t seen those amplification posts before, seems very interesting!
You could add this post of mine to space colonization: An Informal Review of Space Exploration.
I think the ‘existential risks’ category is too broad and some of the things included are dubious. Recommender systems as existential risk? Autonomous weapons? Ideological engineering?
Finally, I think the categorization of political issues should be heavily reworked, for various reasons. This kind of categorization is much more interpretable and sensible:
Electoral politics
Helping the Democratic Party (USA)
Vote pairing
...
Domestic policy
Housing liberalization
Expanding immigration
Capitalism
...
Political systems
Electoral reform
Statehood for Puerto Rico
...
Foreign policy and international relations
Great power competition
Nuclear arms control
Small wars
Democracy promotion
Self-determination
...
I wouldn’t use the term ‘culture war’ here, it means something different than ‘electoral politics’.
I agree that the categorization scheme for politics isn’t that great. But I also think that there is an important difference between “pulling one side of the rope harder” (currently under “culture war”; say, putting more resources into the US Senate races in Georgia) and “pulling the rope sideways”, say, Getting money out of politics and into charity [^1].
Note that a categorization scheme which distinguishes between the two doesn’t have to take a position on their value. But I do want the categorization scheme to distinguish between the two clusters because I later want to be able to argue that one of them is ~worthless, or at least very unpromising.
Simultaneously, I think that other political endeavors have been tainted by association with more “pulling the rope harder” kinds of political proposals, and making the distinction explicit makes it more apparent that other kinds of political interventions might be very promising.
Your proposed categorization seems to me to have the potential to obfuscate the difference between topics which are heavily politicized among US partisan lines, and those which are not. For example, I don’t like putting electoral reform (i.e., using more approval voting, which would benefit candidates near the center with broad appeal) and statehood for Puerto Rico (which would favor Democrats) in the same category.
I’ll think a little bit about how and whether to distinguish between raw categorization schemes (which should presumably be “neutral”) and value judgments or discussions (which should presumably be separate). One option would be to have, say, a neutral third party (e.g., Aaron Gertler) choose the categorization scheme.
Lastly, I wanted to say that although it seems we have strong differences of opinion on this particular topic, I appreciate some of your high quality past work, like Extinguishing or preventing coal seam fires is a potential cause area, Love seems like a high priority, the review of space exploration which you linked, your overview of autonomous weapons, and your various posts on the meat eater problem.
[^1]: Vote pairing would be in the middle, because it could be used both to trade Democrat ⇔ third party candidates and Republican ⇔ third party candidates, with third party candidates being the ones that benefit the most (which sounds plausibly good). In practice, I have the impression that exchanges have mostly been set up for Democrat ⇔ third party trades, but if they gain more prominence I’d imagine that Republicans would invest more in their own setups.
Thanks for the comments. Let me clarify about the terminology. What I mean is that there are two kinds of “pulling the rope harder”. As I argue here:
To illustrate the point, the person who came up with the idea of ‘pulling the rope sideways’, Robin Hanson, does indeed refrain from commenting on election choices and most areas of significant public policy, but has nonetheless been quite willing to state opinions on culture war topics like political correctness in academia, sexual inequality, race reparations, and so on.
I think that most people who hear ‘culture wars’ think of the purity politics and dunking and controversies, but not stuff like voting or showing up to neighborhood zoning meetings.
So even if you keep the same categorization, just change the terminology so it doesn’t conflate those who are focused on serious (albeit controversial) questions of policy and power with those who are culture warring.
Fair enough; I’ve changed this to “Ideological politics” pending further changes.
Added the Space Exploration Review. Great post, btw, of the kind I’d like to see more of for other speculative or early stage cause candidates.
I agree that the existential risks category is too broad, and that I was probably conflating it with dangers from technological development. Will disambiguate.
Great list! It reminds me of Peter McClusky’s “Future of Earning to Give” post showing that there is plenty of room for more funding of high impact projects.
I would like to see more about ‘minor’ GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat the probability as 1.
I don’t think this is remotely justified. The arguments I’ve seen are generally of the form ‘we’ll still be able to salvage enough resources to theoretically recreate any given technology’, which doesn’t mean we can get anywhere near the economies of scale needed to create global industry on today’s scale, let alone that we actually will given realistic political development. And the industry would need to reach the point where we’re a reliably spacefaring civilisation, well beyond today’s technology, in order to avoid the usual definition of being an existential catastrophe (drastic curtailment of life’s potential).
If the chance of recovery from any given backslide is 99%, then that’s only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn’t that big a deal (Toby Ord’s The Precipice puts an order of magnitude or two between the probability of many of the existential risks we’re typically concerned with).
Things I would like to see more discussion of in this area:
General principles for assessing the probability of reaching interstellar travel given specific backslide parameters and then, with reference to this:
Kessler syndrome
Solar storm disruption
CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed—in Toby’s book he details some extreme scenarios where a lot of CO2 could be released that wouldn’t necessarily cause human extinction by global warming, but some of my back-of-the-envelope maths based on his figures seemed consistent with this scenario)
CO2 emissions from fossil fuels and other climate change substantially reducing IQs
Various ‘normal’ concerns: antibiotic resistant bacteria; peak oil; peak phosphorus; substantial agricultural collapse; moderate climate change; major wars; reverse Flynn effect; supporting interplanetary colonisation; zombie apocalypse
Other concerns that I don’t know of, or that no-one has yet thought of, that might otherwise be dismissed by committed X-riskers as ‘not a big deal’
Congratulations on winning the comment award! I definitely agree we should broaden the scenarios at which we look. You can see some work on the long term future impact of lesser catastrophes here and here.
Yes, and other catastrophes that could disrupt electricity/industry, such as high-altitude detonation of a nuclear weapon causing an electromagnetic pulse, coordinated cyber attack on electricity (perhaps narrow AI enabled), or an extreme pandemic causing the desertion of critical jobs may be important to work on.
Even 7000 ppm (0.7%) CO2 only has mild effects, and this is much higher than is plausible for Earth’s atmosphere in the next few centuries.
It is possible that overreaction to these could cause large enough increases in prices to make the poor of the world significantly worse off, which could cause political instability and eventually lead to something like nuclear war. But I think that is much lower probability than catastrophes that could directly and abruptly reduce the food supply by on the order of 10%.
I think that for moderate climate change, perhaps 2°C over a century, it is difficult to find a direct route to collapse. However, it would make a 10% food production shortfall from extreme weather more likely. And there are many other catastrophes that could plausibly produce a 10% food production shortfall, such as:
1. Abrupt climate change (10°C loss over a continent in a decade, which has happened before)
2. Extreme climate change that is slow (~10°C over a century)
3. Volcanic eruption like Tambora (which caused the year without a summer in 1816: famine in Europe)
4. Super weed that out-competes crops, if a coordinated attack
5. Super crop disease, if a coordinated attack
6. Super crop pest (animal), if a coordinated attack
7. Losing beneficial bacteria abruptly
8. Abrupt loss of bees
9. Gamma ray burst, which could disrupt the ozone layer
This could be a 10% infrastructure destruction, so I think it could destabilize. Disruption of the Internet for an extended period globally could also cut off a lot of essential services.
Even if the Flynn effect has stalled in developed countries (has it?), I still think globally over this century we are going to have a massive positive Flynn effect as education levels rise.
Agreed, which is a reason that resilience and response are also important.
I agree that I’d like to see more research on topics like these, but would flag that they seem arguably harder to do well than more standard X-risk research.
I think from where I’m standing, direct, “normal” X-risk work is relatively easy to understand the impact of; a 0.01% chance less of an X-risk is a pretty simple thing. When you get into more detailed models it can be more difficult to estimate the total importance or impact, even though more detailed models are often overall better. I think there’s a decent chance that 10-30 years from now the space would look quite different (similar to ways you mention) given more understanding (and propagation of that understanding) of more detailed models.
One issue regarding a Big List is figuring out what specifically should be proposed. I’d encourage you to write up a short blog post on this and we could see about adding it to this list or the next one :)
Why would research on ‘minor’ GCRs like the ones mentioned by Arepo be harder than, e.g., AI alignment?
My impression is that there is plenty of good research on, e.g., the effects of CO2 on health, the Flynn effect, and Kessler syndrome, and I would say it’s of much higher quality than extant X-risk research.
Is the argument that they are less neglected?
My point was just that understanding the expected impact seems more challenging. I’d agree that understanding the short-term impacts are much easier of those kinds of things, but it’s tricky to tell how that will impact things 200+ years from now.
Write a post on which aspect? You mean basically fleshing out the whole comment?
Yes, fleshing out the whole comment, basically.
Related—Problem areas beyond 80,000 Hours current priorities (Jan 2020).
From there, at least Migration Restrictions and Global Public Goods seem to be missing from this list.
I recommend changing the “climate change” header to something a bit broader (e.g.”environmentalism” or “protecting the natural environment”, etc.). It is a shame that (it seems) climate change has come to eclipse/subsume all other environmental concerns in the public imagination. While most environmental issues are exacerbated by climate change, solving climate change will not necessarily solve them.
A specific cause worth mentioning is preventing the collapse of key ecosystems, e.g. coral reefs: https://forum.effectivealtruism.org/posts/YEkyuTvachFyE2mqh/trying-to-help-coral-reefs-survive-climate-change-seems
With regards to coral reefs, your post is pretty short. In my experience, it’s more likely that people will pay more attention to it if you flesh it out a little bit more.
Yeah… it’s not at all my main focus, so I’m hoping to inspire someone else to do that! :)
Yeah, this makes sense, thanks.
A cause candidate: risks from whole brain emulation
https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors#Whole_brain_emulation
In that context, this seems maybe like just a pathway for reducing long-term-risks from malevolent actors? Or, are you thinking more of Age of Em or something else which Hanson wrote?
Sorry, you’re right; the link I provided earlier isn’t very relevant (that was the only EA Forum article on WBE I could find). I was thinking something along the lines of what Hanson wrote. Especially the economic and legal issues (this and the last 3 paragraphs in this; there are other issues raised in the same Wiki article as well). Also Bostrom raised significant concerns in Superintelligence, Ch. 2 that if WBE was the path to the first AGI invented, there is significant risk that unfriendly AGI will be created (see the last set of bullet points in this).
Ok, cheers, will add.
One other cause-enabler I’d love to see more research on is donating to (presumably early stage) for-profits. For all that they have better incentives it’s still a very noisy space with plenty of remaining perverse incentives, so supporting those doing worse than they merit seems like it could be high value.
It might be possible to team up with some VCs on this, to see if any of them have a category of companies they like but won’t invest in; perhaps because of a surprising lack of traction; or perhaps because of predatory pricing by companies with worse products/ethics; perhaps some other unmerited headwind.
Cool! Like the list, like the forecasting project idea.
Tiny suggestion, I think “Air Purifiers Against Pollution” shouldn’t go into the Climate Change basket, and instead probably to Global Health & Development.
Thanks! I like the kind comments you leave under my posts; they brighten my day.
Aw, really glad to hear that!
Changed the “Air Purifiers Against Pollution” categorization.
What an excellent resource! Thanks for assembling such a comprehensive and wide-ranging list.
Suggested formatting correction: two sub-causes, 7. CO2 Sensors and 8. Hurricanes, are at the wrong heading level; they shouldn’t be H2 but the equivalent of H4, for consistency.
Thanks! I’m not sure when it will next be updated, but when it is I’ll fix the error.
Thread for changelogs
Changelog 23rd Jan/2021
Added “Decline of the US”.
Added “Corporate Social Responsibility & Corporate Giving Strategies” under “Community Building”.
Added RP Work Trial Output: How to Prioritize Anti-Aging Prioritization—A Light Investigation to “Ageing”.
Added Stubble Burning in India to “Global Health and Development”
Added Why are party politics not an EA priority? to “Politics: Local politics”
Added Exploring Using Insights from International Relations Theory to facilitate International Cooperation Against Existential Risks to “Existential Risks”
Added Trying to help coral reefs survive climate change seems incredibly neglected. to “Climate Change”
Did not change “Climate Change” to “Environmentalism” per @capybaralet’s suggestion because there aren’t any environmentalism suggestions yet which don’t also apply to Climate Change.
Added “Reducing Risks from Whole Brain Emulation” to “Existential Risks”
Added “Preventing/Avoiding Stable Longterm Totalitarianism” to “Existential Risks”
Added “Reducing Risks from Atomically Precise Manufacturing / Molecular Nanotechnology” to “Existential Risks” (category unclear, it could even be under “Global Health and Development”)
Added “Water, Sanitation and Hygiene Interventions” to “Global Health and Development”
Added https://www.lesswrong.com/tag/wireheading to “Universal Euphoria”.
Notes:
I don’t like “Politics: System Change, Targeted Change, and Policy Reform” as a category. I’m thinking of dividing it into several subcategories (e.g., “Politics: Systemic Change”, “Politics: Mechanism Change”, “Politics: Policy Change” and “Politics: Other”). I’d also be interested in more good examples of systemic change interventions, because the one which I most intensely associate with it is something like “Marxist revolution”.
Hat tip to @Prabhat Soni for suggesting risks from whole brain emulation, atomically precise manufacturing, infodemics, cognitive enhancement, universal basic income, and the LessWrong tag for wireheading.
To do:
Think about adding “Cognitive Enhancement” as a cause area. See Bostrom here. Unclear to what extent it would be distinct from “Raising IQ”
Think about adding “Infodemics and protecting organisations that promote the spread of accurate knowledge like Wikipedia.”. In particular, think if there is a more general category to which this belongs.
Tag these and add them to the google doc.
Follow up with the people who suggested these candidates.
To do:
Add https://forum.effectivealtruism.org/posts/MjWHupe8d8mMGJqKP/ea-and-the-possible-decline-of-the-us-very-rough-thoughts as a cause candidate.
Add risks from whole Brain emulation.
Add preventing/avoiding stable longterm totalitarianism.
Add atomically precise manufacturing / molecular nanotechnology
Add https://forum.effectivealtruism.org/posts/FAA22RbfgC68fRnRs/if-you-mostly-believe-in-worms-what-should-you-think-about
Done. From now on, this to do will be at the end of my “changelogs”
Transhumanism would benefit from also exploring human augmentation. Thanks for the list.
Can you give a bit more of an explanation about the scoring in the google sheet? E.g. time horizon, readiness, promisingness etc.
I was slightly disappointed to see such low scores for my idea of philosophy in schools (but I guess I should have realised by now that it’s not cause X!). I’m not sure I agree with ‘time horizon’ being ‘very short’ though given that some of the main channels through which I hope the intervention would be good are in terms of values spreading (which you rate as ‘medium’) and moral circle expansion (which you rate as ‘long’). The whole point of my post was to argue for this intervention from a longtermist angle and it was partly in response to 80,000 Hours listing ‘broadly promoting positive values’ as a potential highest priority. So saying time horizon is ‘very short’ is a sign that you didn’t engage with the post at all, or (quite possibly!) that I’ve misunderstood something quite important. If you do have some specific feedback on the idea I’d appreciate it!
A post about this is incoming.
With respect to philosophy in schools in particular:
Why I’m not excited about it as a cause area:
Your post conflicts with my personal experience of how philosophy in schools can be taught. (Spain has philosophy, ethics and civics classes for kids in its curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression.)
I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn’t seem central to that idea.
I believe that there aren’t enough excellent philosophy teachers for it to be implemented at scale.
I don’t give much credence to the papers you cite replicating at scale.
On the above two points, see Khorton’s comments in your post.
To elaborate a bit on that, there are some things in the class of “philosophy in schools” that scale really well, like, say, CBT. But I expect that “philosophy in schools” would scale like, say, Buddhist meditation (i.e., badly without good teachers).
Philosophy seems like a terrible field. It has low epistemic standards. It can’t come to conclusions. It has Hegel. There is simply a lot of crap to wade through.
Philosophy in schools meshes badly with religion and it’s easy for the curriculum to become political.
I imagine that teaching utilitarianism at scale in schools is not very feasible.
I’d expect EA to lose a political battle over teaching EA values (as opposed to, say, Christian values, or liberal values, or feminist values, etc.). I also expect this fight to be costly.
Why I categorized it as “very-short”:
If I think about how philosophy in schools would be implemented, and you can see this in Spain, I imagine this coming about as a result of a campaign promise, and lasting for a term or two (4, 8 years) until the next political party comes with their own priorities. In Spain we had a problem with politicians changing education laws too often.
You in fact propose getting into party politics as a way to implement “philosophy in schools”.
When I think of trying to come up with a 100- or 1000-year research program to study philosophy in schools, the idea doesn’t strike me as much superior to the 10-year version: do a review of the existing literature on philosophy in schools and try to get it implemented. This is in contrast with other areas, for which, e.g., a 100–1000+ year observatory for global priorities research or unknown existential risks does strike me as more meaningful.
One of your arguments was: “One reason why it might be highly impactful for philosophy graduates to teach philosophy is that they may, in many cases, not have a very high-impact alternative.” This doesn’t strike me as a consideration that will last for generations (though, you never know with philosophy graduates)
That said, I can also see why classifying it as longer term would make sense.
OK thanks for this reply! I think some of this is fair and as I say, I’m not clinging to this idea as being hugely promising. Some of your comments seem quite personal and possibly contentious, but then again I don’t know what the context of the scoring is so maybe that’s sort of the idea at this stage.
A few specific thoughts.
OK this seems fairly personal and anecdotal (as I said maybe this is fine at this stage but I hope this sort of analysis wouldn’t play a huge role in scoring at a later stage).
Not sure what point you’re making here (I also know this EA by the way).
Perhaps fair! We could always train more teachers though.
Hmm. Well I at least feel fairly confident that a lot of people will disagree with you here. And any good curriculum designer should leave out the crap. My experience with philosophy has led me to go vegan, engage with EA and give effectively (think Peter Singer type arguments). I’ve found it quite important in shaping my views and I’m quite excited by the field of global priorities research which is essentially econ and philosophy.
If you teach philosophy, you will probably spend at least a little bit of time teaching utilitarianism within that. Not really sure what you’re saying here.
It’s teaching philosophy, not teaching values. In the post I don’t suggest we include EA explicitly in the curriculum. In any case, EA is the natural conclusion of a utilitarian philosophy and I would expect any reasonable philosophy curriculum to include utilitarianism.
Ok interesting. I didn’t really consider that its inclusion might just be overturned by another party. From my personal experience you don’t get subjects being dropped very often and so I was hopeful for staying power, but maybe this is a fair criticism.
OK fine this (and your later comments) was probably me just not knowing what you meant by ‘time horizon’.
Yeah, this is fair. Ideally I’d ask a bunch of people what their subjective promisingness was, and then aggregate over that. I’d have to somehow adjust for the fact that people from EA backgrounds might have gone to excellent universities and schools, and thus their estimate of teacher quality might be much, much higher than average, though.
I’m not sure why your instinct is to go by your own experience or ask some other people. This seems fairly ‘un-EA’ to me and I hope whatever you’re doing regarding the scoring doesn’t take this approach.
I would go by the available empirical evidence, whilst noting any likely weaknesses in the studies. The weaknesses brought up by Khorton (and which you referenced in your comment) were actually noted in the original empirical review paper, which said the following regarding the P4C process:
“Many of the studies could be criticized on grounds of methodological rigour, but the quality and quantity of evidence nevertheless bears favourable comparison with that on many other methods in education.”
“It is not possible to assert that any use of the P4C process will always lead to positive outcomes, since implementation integrity may be highly variable. However, a wide range of evidence has been reported suggesting that, given certain conditions, children can gain significantly in measurable terms both academically and socially through this type of interactive process.”
“further investigation is needed of wider generalization within and beyond school, and of longer term maintenance of gains”
My overall feeling on scale was therefore that it was ‘promising’ but still unclear. I’m not impressed with just giving scale rating = 1 based on personal feeling/experience to be honest. Your tractability points possibly seem more objective and justified.
From where I’m sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don’t really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements.
I’m quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members in the community. Scientific studies are often very narrow in scope, don’t cover the thing we’re really interested in, and often they don’t even replicate.
My guess is that if we were to show several senior/respected EAs at OpenPhil/FHI and similar your previous blog post, as is, they’d be similarly skeptical to Nuño here.
All that said, I think there are more easily-arguable proposals around yours (or arguably, modifications of yours). It seems obviously useful to make sure that Effective Altruists have good epistemics and there are initiatives in place to help teach them these. This includes work in Philosophy. Many EA researchers spend quite a while learning about Philosophy.
I think people are already bought into the idea of basically teaching important people how to think better. If large versions of this could be expanded upon, they seem like cause candidates that there could be substantial buy-in for.
For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters or Youtubers or similar to teach people the parts of philosophy that are great for reasoning. We could also target who is most important, and very well select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops; like creating tests that indicate one’s epistemic abilities, and measuring educational interventions on such tests.
Hey, fair enough. I think overall you and Nuno are right. I did write in my original post that it was all pretty speculative anyway. I regret if I was too defensive.
I think those proposals sound good. I think they aim to achieve something different to what I was going for as I was mostly going for a “broadly promote positive values” angle on a societal level which I think is potentially important from a longtermist point of view, as opposed to educating smaller pockets of people, although I think the latter approach could be high value.
I can imagine reconsidering, but I don’t in principle have anything against using my S1. Because:
It is fast, and I am rating 100+ causes
From past experience with forecasting, I basically trust it.
It does in fact have useful information. See here for some discussion I basically agree with.
OK I mean you can obviously do what you want and I appreciate that you’ve got a lot of causes to get through.
I don’t place that much stock in S1 when evaluating things as complex as how to do the most good in the world. Especially when your S1 leads to comments such as:
Philosophy seems like a terrible field—I’d imagine you’re in the firm minority here and when that is the case I’d imagine it’s reasonable to question your S1 and investigate further. Perhaps you should do a critique of philosophy on the forum (I’d certainly be interested to read it). There are people who have argued that philosophy does make progress and that it may not be as obvious, as philosophical progress tends to spawn other disciplines that then don’t call themselves philosophy. See here for a write-up of philosophical success stories. In any case what I really care about in a philosophical education is teaching people how to think (e.g. Socratic questioning, Bayesian updating etc.), not get people to become philosophers.
I also studied philosophy at university and overall came away with a mostly negative impression—I mean, what about all the people who don’t come away with a negative impression? They seem fairly abundant in EA.
I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn’t seem central to that idea—I still don’t get this comment to be honest. In my opinion the EA you speak of isn’t doing something similar to what I propose, and even if they were, why would the fact that they don’t see philosophy as central to what they’re doing mean that teaching philosophy would obviously fail?
Anyway I won’t labour the point much more. 43 karma on my philosophy in schools post is a sign it isn’t going to be revolutionary in EA and I’ve accepted that, so it’s not that I want you to rate it highly, it’s just that I’m sceptical of your process of how you did rate it.
Let me try to translate my thoughts to something which might be more legible / written in a more formal tone.
From my experience observing this in Spain, the philosophy curriculum taught in schools is a political compromise, in which religion plays an important role. Further, if utilitarianism is even taught (it wasn’t in my high school philosophy class), it can be taught badly by proponents of some other competing theory. I expect this to happen, because most people (and in expectation most teachers) aren’t utilitarians.
Philosophy doesn’t have high epistemic standards, as evidenced by the fact that it can’t come to a conclusion about “who is right”. Some salient examples of philosophers who continue to be taught and given significant attention despite having few redeeming qualities are Plotinus, Anaximenes, or Hegel. Although it can be argued that they do have redeeming qualities (Anaximenes was an early proponent of proto-scientific thinking, and Hegel has some interesting insights about history and has shaped further thought), paying too much attention to these philosophers would be the equivalent of coming to deeply understand phlogiston or aether theory when studying physics. I understand that grading the healthiness of a field can be counterintuitive or weird, but to the extent that a field can be sick, I think that philosophy ranks near the bottom (in contrast, development economics of the sort where you do an RCT to find out if you’re right would be near the top).
Relatedly, when philosophy is taught, too much attention is usually given to the history of the field. I agree that an ideal philosophy course which promoted “critical thinking” would be beneficial, but I don’t think it would be feasible to implement, because: a) it would have to be the result of a tricky political compromise and would have to be very careful around criticizing whoever is in power, and b) I don’t think there are enough good teachers who could pull it off.
Note that I’m not saying that philosophy can’t produce success stories, or great philosophers, like Parfit, David Pearce, Peter Singer, arguably Bostrom, etc. (though note that all examples except Singer are pretty mathematical). I’m saying that most of the time, the average philosophy class is pretty mediocre.
On this note, I believe that my own (negative) experience with philosophy in schools is more representative than yours. Google brings up that you went to Cambridge and UCL, so I posit that you (and many other EAs who have gone to top universities) have an inflated sense of how good teachers are (because you have been exposed to smart and at least somewhat capable teachers, who had the pleasure of teaching top students). In contrast, I have been exposed to average teachers who sometimes tried to do the best they could, and who often didn’t really have great teaching skills.
tl;dr/Notes:
I have some models of the world which lead me to think that the idea was unpromising. Some of them clearly have a subjective component. Still, I’m using the same “muscles” as when forecasting, and I trust that those muscles will usually produce sensible conclusions.
It is possible that in this case I had too negative a view, though not in a way which is clearly wrong (to me). If I were forecasting the question “will a charity be incubated to work on philosophy in schools” (surprise reveal: this is similar to what I was doing all along), I imagine I’d give it a very low probability, but that my teammates would give it a slightly higher probability. After discussion, we’d both probably move towards the center, and thus be more accurate.
Note that if we model my subjective promisingness = true promisingness + error term, then if we pick the candidate idea at the very bottom of my list (in this case, philosophy in schools, the idea under discussion and one of the four ideas to which I assigned a “very unpromising” rating), we’d expect it both to be unpromising (per your own view) and to have a large error term (I clearly don’t view philosophy very favorably).
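The selection effect in that model can be sketched with a small simulation (a minimal illustration, assuming—for concreteness only—that both true promisingness and the error term are independent standard normals; the original comment doesn’t specify distributions):

```python
import random
import statistics

random.seed(0)

# Simulate many candidate ideas: each has a true promisingness and a
# noisy subjective rating = true promisingness + error term.
n = 10_000
true_vals = [random.gauss(0, 1) for _ in range(n)]
errors = [random.gauss(0, 1) for _ in range(n)]
subjective = [t + e for t, e in zip(true_vals, errors)]

# Average the two components over the 100 lowest-rated ideas.
# Conditioning on a very low subjective rating selects for ideas that
# are BOTH genuinely unpromising AND have a large negative error term.
lowest = sorted(range(n), key=lambda i: subjective[i])[:100]
mean_true = statistics.mean(true_vals[i] for i in lowest)
mean_err = statistics.mean(errors[i] for i in lowest)
print(f"mean true promisingness: {mean_true:.2f}")
print(f"mean error term:         {mean_err:.2f}")
```

With equal variances, both averages come out well below zero: the bottom of a noisy ranking systematically over-represents rater error, which is the regression-to-the-mean point being made here.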
Thanks for the clarifications in your previous two comments. Helpful to get more of an insight into your thought process.
Just a few comments:
I strongly don’t think a charity to work on philosophy in schools would be helpful and I don’t like that way of thinking about it. My suggestions were having prominent philosophers join (existing) advocacy efforts for philosophy in the curriculum, more people becoming philosophy teachers (if this might be their comparative advantage), trying to shift educational spending towards values-based education, more research into values-based education (to name a few).
This is a whole separate conversation that I’m not sure we have to get into right now too deeply (I think I’d rather not), but I think there are severe issues with development economics as a field, to the extent that I would place it near the bottom of the pecking order within EA. Firstly, the generalisability of RCT results is highly questionable (for example, see Eva Vivalt’s research). More importantly and fundamentally, there is the problem of complex cluelessness (see here and here). It is partly considerations of cluelessness that make me interested in longtermist areas such as moral circle expansion and broadly promoting positive values, along with x-risk reduction.
I’m hoping we’re nearing a good enough understanding of each other’s views that we don’t need to keep discussing for much longer, but I’m happy to continue a bit if helpful.
Acknowledged.
A cause candidate suggestion: atomically precise manufacturing / molecular nanotechnology. Relevant EA Forum posts on this topic:
https://forum.effectivealtruism.org/posts/gjEbymta6w8yqNQnE/risks-from-atomically-precise-manufacturing
https://forum.effectivealtruism.org/posts/gzj9Np8WWpKgCJgiW/is-nanotechnology-such-as-apm-important-for-eas-to-work-on
Thanks, will add (but not rn)
I am writing a post about “better/healthier diets”, simply due to their effect on human health. I hope it will be out in the next few weeks—I have to wait for some feedback from experts on this topic.
I wish we could finally strike off cryonics from the list. The most popular answers in the linked ‘Is there a hedonistic utilitarian case for Cryonics? (Discuss)’ essay seem to be essentially ‘no’.
The claim that ‘it might also divert money from wealthy people who would otherwise spend it on more selfish things’ gives no reason to suppose that spending money on yourself in this context is somehow unselfish.
As for ‘Further, cryonics might help people take long-term risks more seriously’: sure. So might giving people better health, or, say, funding long-term risk outreach. At least as plausibly, to me, constantly telling people that they don’t fear death enough and should sign up for cryonics seems likely to make people fear death more, which seems like a pretty miserable thing to inflict on them.
I just don’t see any positive case for this to be on the list. It seems to be a vestige of a cultural habit among Less Wrongers that has no place in the EA world.
The goal of this list was to be comprehensive, not opinionated. We’re thinking about ways of doing ranking/evaluation (particularly with forecasting) going forward. I’d also encourage others to give it their own go; it’s a tricky problem.
One reason to lean towards comprehensiveness is to make it more evident which causes are quite bad. I’m sure, given the number, that many of these causes are quite poor. Hopefully systematic analysis would both help identify these and then make a strong case for their placement.
Then I would suggest being more clear about what it’s comprehensive of, ie by having clear criteria for inclusion.
The Cause Candidates tag has these criteria. You’ll note that cryonics qualifies, as would, e.g., each of kbog’s political proposals, even though I vehemently disagree with them. I think that the case for this is similar to the case in Rule Thinkers In, Not Out.
Can you spell both of these points out for me? Maybe I’m looking in the wrong place, but I don’t see anything in that tag description that recommends criteria for cause candidates.
As for Scott’s post, I don’t see anything more than a superficial analogy. His argument is something like ‘the weight by which we improve our estimation of someone for their having a great idea should be much greater than the weight by which we downgrade our estimation of them for having a stupid idea’. Whether or not one agrees with this, what does it have to do with including on this list an expensive luxury that seemingly no-one has argued for on (effective) altruistic grounds?
Right, the criteria in the tag are almost maximally inclusive (“posts which specifically suggest, consider or present a cause area, cause, or intervention. This is independent of the quality of the suggestion, the community consensus about it, or the level of specificity”). This is because I want to distinguish between the gathering step and the evaluation step. I happen to agree that cryonics right now doesn’t feel that promising, but I’d still include it because some evaluation processes might judge it to be valuable after all. Incidentally, this has happened to me before: seeing an idea which struck me as really weird and then later coming to appreciate it (fish welfare).
Per Scott Alexander’s post, considering the N least promising cause candidates in my list would be like a box which has a low chance of producing a really good idea. It will fail most of the time, but produce good ideas otherwise.
Also, cryonics has been discussed in the context of EA; one just has to follow the links in the post:
https://www.overcomingbias.com/2010/07/cryonics-as-charity.html
https://www.lesswrong.com/posts/Q7PFyobNPwqBsma9g/effective-altruism-and-cryonics-contest-results