AMA: Rethink Priorities’ Worldview Investigation Team
Rethink Priorities’ Worldview Investigation Team (WIT) will run an Ask Me Anything (AMA). We’ll reply on the 7th and 8th of August. Please put your questions in the comments below!
What’s WIT?
WIT is Hayley Clatterbuck, Bob Fischer, Arvo Munoz Moran, David Moss, and Derek Shiller. Our team exists to improve resource allocation within and beyond the effective altruism movement, focusing on tractable, high-impact questions that bear on strategic priorities. We try to take action-relevant philosophical, methodological, and strategic problems and turn them into manageable, modelable problems. Our projects have included:
The Moral Weight Project. If we want to do as much good as possible, we have to compare all the ways of doing good—including ways that involve helping members of different species. This sequence collects Rethink Priorities’ work on cause prioritization across different kinds of animals, human and nonhuman. (You can check out the book version here.)
The CURVE Sequence. What are the alternatives to expected value maximization (EVM) for cause prioritization? And what are the practical implications of a commitment to EVM? This series of posts—and an associated tool, the Cross-Cause Cost-Effectiveness Model—explores these questions.
The CRAFT Sequence. This sequence introduces two tools: a Portfolio Builder, where the key uncertainties concern cost curves and decision theories, and a Moral Parliament Tool, which allows for the modeling of both normative and metanormative uncertainty. The Sequence’s primary goal is to take some first steps toward more principled and transparent ways of constructing giving portfolios.
In the coming months, we’ll be working on a model to assess the probability of digital consciousness.
What should you ask us?
Anything! Possible topics include:
How we understand our place in the EA ecosystem.
Why we’re so into modeling.
Our future plans and what we’d do with additional resources.
What it’s like doing “academic” work outside of academia.
Biggest personal updates from the work we’ve done.
Acknowledgments
This post was written by the Worldview Investigation Team at Rethink Priorities. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.
Has the moral uncertainty inherent in your work influenced your day-to-day decision-making or personal philosophy?
I think that I’ve become more accepting of cause areas that I was not initially inclined toward (particularly various longtermist ones) and also more suspicious of dogmatism of all kinds. In developing and using the tools, it became clear that there were compelling moral reasons in favor of almost any course of action, and slight shifts in my beliefs about risk aversion, moral weights, aggregation methods, etc. could lead me to very different conclusions. This inclines me more toward very significant diversification across cause areas.
I share your inclination toward significant diversification. However, I find myself grappling with the question of whether there should be specific limits on this diversification. For instance, Open Philanthropy’s approach seems to be “we diversify amongst worldviews we find plausible,” but it’s not clear to me what makes a worldview plausible. How seriously should we consider, for example, Nietzscheanism?
After working on WIT, I’ve grown a lot more comfortable producing provisional answers to deep questions. In similar academic work, there are strong incentives to only try to answer questions in ways that are fully defensible: if there is some other way of going about it that gives a different result, you need to explain why your way is better. For giant nebulous questions, this means we will make very slow progress on finding a solution. Since these questions can be very important, it is better to come up with some imperfect answers than to work only on simpler problems. WIT tries to tackle big, important, nebulous problems, and we sometimes have to make questionable assumptions to do so. The longer I’ve spent here, the more worthwhile our approach feels to me.
Excellent question, Ian! At a high level, I’d say that moral uncertainty has made me much more inclined to care about having an overlapping consensus of reasons for any important decision. In other words, I want a diverse set of considerations to point in the same direction before I’m inclined to make a big change. That’s how I got into animal work in the first place: it’s good for the animals, good for human health, good for long-term food security, good for the environment, etc. There are probably lots of other impacts too, but those are the first that come to mind!
Has anyone on the team changed their mind about their priorities/certainty levels because of the output of one of your tools?
A few things come to mind. First, I’ve been really struck by how robust animal welfare work is across lots of kinds of uncertainties. It has some of the virtues of both GHD (a high probability of actually making a difference) and x-risk work (huge scales). Second, when working with the Moral Parliament tool, it is really striking how much of a difference different aggregation methods make. If we use approval voting to navigate moral uncertainty, we get really different recommendations than if we give every worldview control over a share of the pie or if we maximize expected choiceworthiness. For me, figuring out which method we should use turns on what kind of community we want to be and which (or whether!) democratic ideals should govern our decision-making. This seems like an issue we can make headway on, even if there are empirical or moral uncertainties that prove less tractable.
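To make that concrete, here is a toy sketch (illustrative numbers of ours, not the Moral Parliament Tool’s actual code or parameters) showing how the three aggregation methods just mentioned can each recommend something different given the same credences:

```python
import numpy as np

# Toy credences and choiceworthiness scores (illustrative only, not the
# Moral Parliament Tool's actual inputs).
options = ["Global health", "Animal welfare", "X-risk"]
credences = np.array([0.40, 0.35, 0.25])  # credence in each worldview
choiceworthiness = np.array([
    [0.90, 0.50, 0.45],   # worldview A's score for each option
    [0.20, 0.60, 0.35],   # worldview B
    [0.10, 0.20, 1.00],   # worldview C
])

# 1. Maximize expected choiceworthiness (MEC): credence-weighted averages.
mec = credences @ choiceworthiness
print("MEC picks:", options[int(np.argmax(mec))])                     # X-risk

# 2. Approval voting: each worldview approves options scoring above 0.4;
#    approvals are weighted by credence, and the most-approved option wins.
approvals = credences @ (choiceworthiness > 0.4)
print("Approval voting picks:", options[int(np.argmax(approvals))])  # Animal welfare

# 3. Proportional shares: each worldview controls a slice of the budget
#    equal to its credence and spends it on its own favorite option.
shares = np.zeros(len(options))
for cred, scores in zip(credences, choiceworthiness):
    shares[int(np.argmax(scores))] += cred
print("Proportional allocation:", dict(zip(options, shares.round(2).tolist())))
```

With these made-up numbers, MEC goes all-in on x-risk, approval voting picks animal welfare, and proportional shares split the budget three ways: the choice of aggregation method, not just the inputs, drives the recommendation.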
I was personally struck by how sensitive portfolios are to even modest levels of risk aversion. I don’t know what the “correct” level of risk aversion is, or what the optimal decision procedure is in practice (even though most of my theoretical sympathies lie with expected value maximisation). Even so, seeing that introducing a bit of risk aversion, even with parameters relatively generous towards x-risk, still points towards spending most resources on animals (and sometimes global health) has led me to believe that type of work is robustly better than I used to think. There are many uncertainties, and I don’t think EA should be reduced to any one of its cause areas, but, especially given this update, I would be sad to see the animal space shrink in relative size any more than it has.
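Here is a toy illustration of that sensitivity (the numbers and the probability-weighting function are mine, a crude stand-in for the richer risk-aversion models actually explored in the CURVE sequence):

```python
# Toy comparison: a long-shot x-risk project vs. a near-certain
# animal welfare project (illustrative numbers only).
p_xrisk, value_xrisk = 1e-6, 10_000_000   # tiny chance of a huge payoff
p_animal, value_animal = 0.9, 10          # high chance of a modest payoff

def expected_value(p, v):
    return p * v

def risk_weighted_value(p, v, a=2.0):
    # One simple form of risk aversion: weight the probability of actually
    # making a difference by w(p) = p**a with a > 1, which discounts long
    # shots much more heavily than near-certainties.
    return (p ** a) * v

for name, p, v in [("x-risk", p_xrisk, value_xrisk),
                   ("animal welfare", p_animal, value_animal)]:
    print(f"{name:15s} EV = {expected_value(p, v):6.2f}   "
          f"risk-weighted = {risk_weighted_value(p, v):.6f}")
```

Straight expected value narrowly favors the long shot (10 vs. 9), while even this simple weighting decisively favors the near-certain option (8.1 vs. 0.00001).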
One of the big prioritization changes I’ve taken away from our tools is within longtermism. Playing around with our Cross-Cause Cost-Effectiveness Model, it became clear to me that much of the expected value of the long-term future comes from the direction we expect it to take, rather than just whether it happens at all. If you can shift that direction even a little, it makes a huge difference to overall value. I no longer think that extinction risk work is the best kind of intervention if you’re worried about the long-term future. I tend to think that AI (non-safety) policy work would prove more impactful in expectation if we worked through all of the details.
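A stylized version of the point about direction versus survival (toy numbers of mine, not outputs of the Cross-Cause Cost-Effectiveness Model):

```python
# Stylized comparison: reducing extinction probability vs. nudging the
# expected trajectory of the future (toy numbers only).
p_survive = 0.8
value_if_survive = 1_000.0  # expected value of the future, given survival

baseline = p_survive * value_if_survive

# Intervention A: cut extinction risk by one percentage point.
ev_a = (p_survive + 0.01) * value_if_survive

# Intervention B: shift the expected trajectory up by 5%.
ev_b = p_survive * (value_if_survive * 1.05)

print(f"baseline EV:          {baseline:.0f}")
print(f"extinction-risk work: {ev_a:.0f}  (+{ev_a - baseline:.0f})")
print(f"trajectory work:      {ev_b:.0f}  (+{ev_b - baseline:.0f})")
```

With these numbers, the trajectory shift adds four times as much expected value as the risk reduction. Whether a shift of that size is as tractable as a one-point risk reduction is, of course, exactly the kind of empirical question a model forces you to confront.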
I’m a “chickens and children” EA, having come to the movement through Singer’s arguments about animals and global poverty. I still find EA most compelling both philosophically and emotionally when it focuses on areas where it’s clear that we can make a difference. However, the more I grapple with the many uncertainties associated with resource allocation, the more sympathetic I become to diversification, to include significant resources for work that doesn’t appeal to me at all personally. So you probably won’t catch me pivoting to AI governance anytime soon, but I’m glad others are doing it.
Have you considered writing single-post summaries of your projects? I suspect this would greatly increase their influence within the wider EA community, because only the people who are most interested in a topic are likely to read a whole sequence on it.
Thanks for your question, Chris. We hear you about the importance of making the content accessible. We’ve aimed to include the main takeaways in intro and conclusion posts that can be easily skimmed. We also provide an executive summary at the beginning of each post. We hope that these help, but we take the point that it may not be obvious that we’ve taken these steps, and we’ll revisit this suggestion in future sequences to make sure the purposes of those posts and introductory materials are clear. It may also be useful for us to consider more visual summaries of some of our results, as we provided for our discussion of human extinction. Do you have any concrete suggestions given the approach we’ve adopted so far?
That seems reasonable. I guess the downside is that some people might see the sequence, think it’s too much, and never click into it to realise that structure. But obviously there are trade-offs here, in that time spent writing single-post summaries could be spent elsewhere.
Do you have any thoughts about how we could highlight more clearly that the main posts introducing the tools (Portfolio and Parliament) are key things people should read if they just want to read one thing?
Not to downplay the value of the other posts in the sequence (and I think the other technical supplements are useful too), but I think just reading those posts (and using the tools!) would capture most of the value of the sequence for most people.
No strong thoughts.
Have you ever considered interacting with policy institutes or political commissions (the EU Commission, national parliaments, etc.) to spread the word about effective resource allocation and similar approaches that governmental departments could adopt?
The second one is more daring, but I’m curious. How much do Open Philanthropy and its council of advisors rely on or apply your advice? For example, you wrote a very interesting sequence on value maximisation, and one insight was that animal welfare was a winner in both the short and the long term. But that doesn’t seem to translate at all into OP’s current funding allocation, given the recent reductions in its animal welfare budget and the tightening of grant criteria for animal projects.
Great questions, Vaipan. To your first question, the short answer is “Yes.” We can’t say much about our efforts in that direction, but we’re certainly keen to have broad influence! To your second question, we’ve received positive reviews of our work from several people at OP and have had lots of productive discussions about the place of risk aversion in resource allocation. However, we can’t comment on OP’s internal decision-making.
Do you have a sense of how impactful your work has been so far (in particular the work that has been out longer, like the Moral Weight Project and the CCM tool)? I’d be interested to hear specific impact stories, if you can share them.
Thanks for all you do—I think it’s really cool! :)
Good question! Re: the Moral Weight Project, perhaps the biggest area of impact has been on animal welfare economics, where having a method for making interspecies comparisons is crucial for benefit-cost analysis. Many individuals and organizations have also reported to us that our work was an update on the importance of animals generally and of invertebrates specifically. We’ve seen something similar with the CCM tool, with results ranging from positive feedback and enthusiasm to more concrete updates in users’ decisions. There’s more we can say privately than publicly, however, so please feel free to get in touch if you’d like to chat!
Do you plan to conduct empirical work on either of the tools you’ve released recently? Interested to hear any reasons you think this would or wouldn’t be especially valuable!
Thanks for the question, Carter! Would you mind saying a bit more about the kind of empirical work you have in mind? Are you thinking about empirical research into the inputs to the tools? Or are you thinking about using the tools to conduct research on people’s views about cause prioritization? Do you have any concrete empirical projects you’d like to see WIT do?
I was imagining you could use the tools to assess people’s views about cause prioritization! In particular, I’m not sure whether you record users’ responses when they use either tool, but I’d be interested in seeing these data. It may also be valuable to recruit a more representative sample to see how most people react to moral uncertainty or otherwise engage with the tools.
Of course, I think a limitation in both these cases is that most people are pretty unfamiliar with moral uncertainty, and so a) probably a lot of people who use both tools are simply testing assumptions out and not necessarily expressing their true views, and b) I’m not sure whether recruiting people without a philosophical background would yield high-quality data. These might mean it’s not worth the effort, but I’m curious what the team’s thoughts are!
Although you have addressed the question of uncertainty in recent work, I am not seeing it implemented fully in your tools. I’d like to see an in-depth treatment (and incorporation into your tools) of the position stated by Andreas Mogensen in his paper ‘Maximal Cluelessness’, Global Priorities Institute Working Paper No. 2/2019:
“We lack a compelling decision theory that is consistent with a long-termist perspective and does not downplay the depth of our uncertainty while supporting orthodox effective altruist conclusions about cause prioritisation.”
In my view, if one accepts 100% the implications of maximal cluelessness (which is ever more strongly supported by dynamical systems and chaos theory, the more longtermist the perspective), then the logical conclusion from that position is to fund projects randomly, with random amounts.
The RP team may wish to consider prioritising the study of complexity and dynamical systems etc. as part of their continuing professional development (CPD). I recommend the courses offered by the Santa Fe Institute. You can register for most courses at any time, but the agent-based modelling course requires registration and starts at the end of August: https://www.complexityexplorer.org/courses/183-introduction-to-agent-based-modeling
Stephen Hawking famously once said that the 21st century would be the century of complexity. I wholeheartedly agree. IMHO, in these non-linear times, it should be a part of every scientist’s (and philosopher’s) basic education.
Thanks for raising this point. We think that choosing a decision theory that can handle imprecise probabilities is a complex issue that has not been adequately resolved. We take the point that Mogensen’s conclusions have radical implications for the EA community at large, and we haven’t formulated a compelling story about where Mogensen goes wrong. However, we believe there are likely to be solutions that avoid those radical implications, so we don’t need to bracket all cause prioritization work until we find them. Admittedly, our tools may only be useful to those who think there is real work to be done on cause prioritization.
As a practical point, our Cross-Cause Cost-Effectiveness Model works with precise probabilities, using Monte Carlo methods that randomly sample a value for each parameter from a distribution on every run. We noted some hesitance about imposing a specific distribution across our range of radical uncertainty, but we stand behind this as a reasonable choice given our pragmatic aims. If the alternative is not trying to calculate relative expected values at all, we think that would be a loss, even if methodological doubts still attach to our own results.
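For readers unfamiliar with the approach, here is a minimal sketch of this kind of Monte Carlo parameter sampling (the parameter names and distributions are hypothetical, not the CCM’s actual inputs):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000  # number of Monte Carlo runs

# Hypothetical parameters for an illustrative intervention, each drawn
# from a distribution rather than fixed at a point estimate.
cost_per_unit = rng.lognormal(mean=np.log(1000), sigma=0.5, size=N)   # dollars
units_affected = rng.lognormal(mean=np.log(50), sigma=0.8, size=N)
welfare_gain_per_unit = rng.normal(loc=1.0, scale=0.4, size=N)

# Cost-effectiveness of each simulated run: welfare gained per dollar.
cost_effectiveness = units_affected * welfare_gain_per_unit / cost_per_unit

print(f"mean:   {cost_effectiveness.mean():.4f} welfare units per dollar")
print(f"median: {np.median(cost_effectiveness):.4f}")
print(f"5th-95th percentile: {np.percentile(cost_effectiveness, 5):.4f} "
      f"to {np.percentile(cost_effectiveness, 95):.4f}")
```

Each run commits to precise values; the spread across runs is what carries the information about uncertainty.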
What do you think people in the EA community get wrong (or fail to sufficiently consider) when it comes to cause prioritisation?
Great (and difficult!) question, Jordan. I (Bob) am responding to this one for myself and not for the team; others can chime in as they see fit. The biggest issue I see in EA cause prioritization is overconfidence. It’s easy to think that because there are some prominent arguments for expected value maximization, we don’t need to run the numbers to see what happens if we have a modest level of risk aversion. It’s easy to think that because the future could be long and positive, the EV calculation is going to favor x-risk work. Etc. I’m not anti-EV; I’m not anti-x-risk. However, I think these are clear areas where people have been too quick to assume that they don’t need to run the numbers because it’s obvious how they’ll come out.
Is there any writing from RP or anywhere else that describes these flaws in more depth, or actually runs the numbers on EV calculations and x-risk?
Yes! I recommend starting with this and this.
Thank you!
I think another common pitfall is not working through things from first principles. I appreciate that it’s challenging and that any model is unrealistic. Still, BOTECs, pre-established boundaries between cause areas/worldviews, and our first instincts more broadly are likely to (and often do) lead us astray. Separately, I’m glad EA is so self-aware and concerned with healthy epistemics, but I think we could do more to guard against echo-chamber thinking.
I agree that thinking from first principles can be great but, as I’m sure you’re aware, it’s super difficult! Do you have any thoughts on encouraging and/or facilitating more of this kind of thinking in the community?
That’s fair. The main thought that comes to mind, which might not be useful, is developing patience (eagerness to reach conclusions is often incompatible with the work required) and choosing your battles early. As you say, it can be hard and time-consuming, so people in the community asking narrower questions and focusing on just one or two is probably the way to go.
What are selfish lifestyle reasons to work on the WIT team?
It’s fun to talk to smart people! Remote work is great. It’s a privilege to be able to think about big problems that are both philosophically complicated and practically important.
Is it fair to say the work WIT does is unusual outside of academia? What are closely related organizations that tackle similar problems?
Yes, what we do is very unusual outside of academia—and inside it too. Re: other groups that do global priorities research, the most prominent ones are GPI, PWI, and the cause prio teams at OP.
How does your team define “good enough” for a sequence? What adjustments do you make when you fall behind schedule? Cutting individual posts? Shortening posts? Spending more time?
That’s a hard one and we’re still trying to figure it out. There are a lot of variables here, many of which are linked to whether we have the funding to linger on a particular project. In general, however, our job isn’t to produce academic research: it’s to inform decisions. So, if we think we’ve done enough to help people who need to make decisions, then that’s a good sign that we should wrap up the project soon.
How much does the direction of a sequence change as you’re writing it? It seems like you have a vision in mind when starting out, but you also mention being surprised by some results.
The general structure tends not to change much—we plan out posts together and have a general sense of the research we want to do—but the narrative certainly evolves as we learn more about the topic we’re investigating. The conclusions definitely aren’t set from the beginning!
Can you tell us more about the structure of research meetings? How frequently do individual authors chat with each other and for what reason? In particular, the CURVE sequence feels very intentionally like a celebration of different “EA methodologies”. Most of the posts feel individual before converging on a big cost-effectiveness analysis.
We’re in touch all the time, brainstorming new ideas, reviewing drafts, and figuring out solutions to problems. The whole team meets once or twice a week and then we individually hop on 1-1 calls more frequently to discuss specific aspects of our projects. Most of the research still has a lead who’s driving it forward, but everyone’s fingerprints tend to be on everything.
Much of your work feels like numerical simulation over discrete choices. Have there been attempts to define “closed-form” analytical equations for your work? What are the reasons to allocate resources to this versus not?
This ties into your earlier question, “How does your team define ‘good enough’ for a sequence?” We think analytical equations can be valuable: they are often tidier, they speed up computational work, and they can provide clearer insights for sensitivity analysis. For example, deriving them is a natural next step in our human extinction post, which we flagged in the conclusion, and we’ve done some work toward this already, though it isn’t yet polished enough to share. Back to the question of when a piece of research is good enough to wrap up: we don’t know for sure, but we’ve found that running computational simulations we’re sufficiently confident in gives us approximations that are perfectly suitable for learning about the models we’re interested in. We hear you that closed-form solutions are mathematically satisfying. But once we’ve learned the main headlines, it’s hard to justify spending the extra time working through closed-form solutions for everything, especially for some of the more complex models with several moving parts.
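As a toy example of that trade-off (our construction, not one of WIT’s models): for a simple model the closed form is immediate and the simulation merely approximates it, but as a model acquires more interacting parts the closed form becomes tedious or unavailable while the simulation strategy stays the same.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy model: value = X * Y with X, Y independent lognormals.
mu_x, s_x = 0.0, 0.5
mu_y, s_y = 1.0, 0.8

# Closed form: E[XY] = E[X] * E[Y] = exp(mu_x + s_x^2/2) * exp(mu_y + s_y^2/2).
closed_form = np.exp(mu_x + s_x**2 / 2) * np.exp(mu_y + s_y**2 / 2)

# Monte Carlo approximation of the same expectation.
N = 1_000_000
samples = rng.lognormal(mu_x, s_x, N) * rng.lognormal(mu_y, s_y, N)
mc_estimate = samples.mean()

print(f"closed form: {closed_form:.4f}")
print(f"Monte Carlo: {mc_estimate:.4f}")
```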
What are the main constraints the WIT team faces?
The standard ones: we’re funding- and capacity-constrained. We could do a lot more with additional resources!
I’ve played with your Moral Parliament Tool. Really cool! I don’t think I’ve seen something like this in philosophy before, where a team of researchers is creating products that aren’t simply papers.
Since you are making products, though: do you carry out user interviews or use other methods to figure out how to best fit the product to your audience?
Relatedly, how do you hope people will use the moral parliament tool?
Thanks for these questions, Toby!
Re: fitting products to our audience, that’s one reason we release them on the Forum! All our tools are in beta; the feedback we receive here is one of the important ways we identify necessary refinements. As time and funding permit, we hope to improve our tools so that they better serve individuals and organizations trying to do as much good as they can. That being said, we also did a lot of user testing in advance, soliciting feedback on many iterations of each tool to improve usability and accessibility.
Re: how we hope people will use the Moral Parliament Tool, we have two main goals. First, we hope that people will use it to have more transparent conversations about their disagreements. For instance, when people are debating the merits of a particular intervention, is the crux the probability that an intervention will backfire, how bad they think backfiring will be, their relative aversions to backfiring, or the way they think they should navigate uncertainty given all their other commitments? The tool forces people to make these kinds of differences explicit and think through their implications. Second, we hope that people will use the Moral Parliament Tool to explore the implications of even modest levels of uncertainty. The tool makes it obvious that changes to parameter values, credences in moral theories, and aggregation methods have big consequences for overall allocations!
Any tips for running discussion groups on the WIT sequences? I’m vaguely interested in doing one on the CURVE sequence (which I’ve read deeply) or the CRAFT sequence (which I’ve only skimmed). However, the technical density seems like a big barrier, and perhaps some posts are more key than others.
Here’s one method that we’ve found helpful when presenting our work. To get a feel for how the tools work, we set challenges for the group: find a set of assumptions that gives all resources to animal welfare; find how risk averse you’d have to be to favor GHD over x-risk; figure out which moral views best favor longtermist causes. Then have the group discuss whether and why those assumptions would support those conclusions. Our accompanying reports are often designed to address these very questions, so that might be a way to find the posts that matter most to you.
From a complex systems perspective, the cross-cause cost effectiveness model is inadequate, since it fails to fully take into consideration or model the complex interactions and interdependencies between cause areas. Did you know, for example, that combatting inequality (a global development goal) is also a proven way of reducing carbon emissions, i.e. reducing the existential risk of climate change, which in turn would reduce biodiversity loss (an animal welfare goal)?
I invite the RP team to consider two of many similar examples:
[1] The 2019 paper published in Nature Sustainability by Nerini et al., Connecting climate action with other Sustainable Development Goals:
“Abstract
The international community has committed to combating climate change and achieve 17 Sustainable Development Goals (SDGs). Here we explore (dis)connections in evidence and governance between these commitments. Our structured evidence review suggests that climate change can undermine 16 SDGs, while combatting climate change can reinforce all 17 SDGs but undermine efforts to achieve 12. Understanding these relationships requires wider and deeper interdisciplinary collaboration. Climate change and sustainable development governance should be better connected to maximize the effectiveness of action in both domains. The emergence around the world of new coordinating institutions and sustainable development planning represent promising progress.”
[2] Abebe Hailemariam, Ratbek Dzhumashev, and Muhammad Shahbaz, “Carbon emissions, income inequality and economic development”, Empirical Economics 59(3), 1139–1159 (2020).
This paper investigates whether changes in income inequality affect carbon dioxide (CO2) emissions in OECD countries. We examine the relationship between economic growth and emissions by considering the role of income inequality in carbon emissions function. To do so, we use a new source of data on top income inequality measured by the share of pretax income earned by the richest 10% of the population in OECD countries. We also use Gini coefficients, as the two measures capture different features of income distribution. Using recently innovated panel data estimation techniques, we find that an increase in top income inequality is positively associated with emissions. Further, our findings reveal a nonlinear relationship between economic growth and emissions, consistent with environmental Kuznets curve. We find that an increase in the Gini index of inequality is associated with a decrease in carbon emissions, consistent with the marginal propensity to emit approach. Our results are robust to various alternative specifications. Importantly, from a policy perspective, our findings suggest that policies designed to reduce top income inequality can reduce carbon emissions and improve environmental quality.
https://link.springer.com/article/10.1007/s00181-019-01664-x
https://www.researchgate.net/profile/Abebe-Hailemariam/publication/331551899_Carbon_Emissions_Income_Inequality_and_Economic_Development/links/5c7fcb91458515831f895d32/Carbon-Emissions-Income-Inequality-and-Economic-Development.pdf
In my view, Rethink Priorities should take on board the conclusion of these and similar papers by promoting ‘wider and deeper interdisciplinary collaboration’, and incorporating the results of that collaboration in your models.
Thanks for looking through our work and for your comment, Deborah. We recognise that different parts of our models are often interrelated in practice. In particular, we’re concerned about the problem of correlations between interventions too, as we flag here. This is an important area for further work. That being said, it isn’t clear that the cases you have in mind are problems for our tools. If you think, for instance, that environmental interventions are particularly good because they have additional (quantifiable or non-quantifiable) benefits, you can update the tool inputs (including the cause or project name) to reflect that and increase the estimated impact of that particular cause area. We certainly don’t mean to imply that climate change is an unimportant issue.
As an environmentalist, even though I acknowledge that much extremely worthwhile research is being done by EA organisations, especially on AI safety, some of the work being done in other areas makes me put my head in my hands and groan in despair. The profound ignorance in effective altruist circles of environmental science, like that of the planetary boundaries [1], for example, which demonstrates the absolute interconnectedness and interdependence of humans and the biosphere—their essential oneness—is depressing, as is the anthropocentrism [2] of the attitudes embedded in some moral philosophical positions.
Let’s consider the example given in the Metanormative Method supplement to the Rethink Priorities’ Charitable Resource Allocation Frameworks and Tools Sequence (the CRAFT Sequence), which is blind to the ecological perspective:
“An example of moral uncertainty
Take the following scenario. A rural village has a growing human population that it is struggling to feed, so it wants to expand its grazing territory into the adjacent countryside. However, the village abuts a forest that is home to an endangered endemic species of monkeys that doesn’t have suitable habitat elsewhere. If the forest is razed, the monkeys will starve. However, a greater number of humans will be fully nourished. If the forest is not razed, then many villagers will face nutritional deficiencies, leading to serious health problems and possible death. You are tasked with deciding what should be done with the forest. You are morally uncertain, assigning some credence to each of the following worldviews, which give very different recommendations about what you ought to do:
Species-neutral justice: The welfare of all individuals matters equally, regardless of species. Justice requires that we secure a minimal amount of welfare for every individual, not that we maximize the overall or average welfare. Recommendation: preserve the monkeys’ habitat because it is necessary for them to live.
Species-neutral utilitarianism: The welfare of all individuals matters equally, regardless of species. The correct action is the one that maximizes overall welfare, even if it requires sacrificing the interests of some individuals. Recommendation: raze the forest because it will result in greater overall welfare.
Humans-only prioritarianism: Human welfare matters much more than monkey welfare. The correct action is the one that has the best overall consequences for welfare, where the welfare of the worst off is given extra weight. Recommendation: raze the forest because that will save humans, and the interests of the monkeys are not morally important in comparison.”
https://docs.google.com/document/d/1pOzOpVxGVSoGW6n4h-BoFqrfzOQAoVj8hzk_VQf8dfA/edit
Can you see the problem here?
No solution or moral theory is offered which takes into account the planetary boundaries and which acknowledges that that which is good for the planet is good for all of us. Razing the forest may provide a short term solution for that particular tribe’s needs but since it undermines the global commons—the forests which are necessary to create the very air we breathe and to regulate the hydrological systems, prevent desertification, and preserve the biodiversity, the web of life in which we are all held—it is ultimately unacceptable because it would lead to the death of all humans and all life on earth if pushed to the extreme.
The example given also does not offer the solution of the tribe learning to restrict its population so that it can live in harmony with the monkeys in their forest.
Any moral philosophy which is anthropocentric, i.e. which does not acknowledge the essential oneness of humanity with nature, the fact that we are all in this together, is no better morally than religions that tell humans that they are the pinnacle of creation and should go forth and multiply and rule over the Earth.
Yes, it’s that bad.
Effective altruists who fail to acknowledge environmental science and the need to protect the global commons, who put human needs above all others, are essentially like fundamentalist Christians. Examples like these show that effective altruism is out of touch with the existential risks caused by its anthropocentrism. One might as well call it EAA—Effective Anthropocentric Altruism—except that anthropocentrism is, in the long term, ineffective, rather than effective. It keeps humanity on our current trajectory, hurtling towards the precipice of extinction.
In my view, the EA movement will die unless it acknowledges these shortcomings and fully embraces environmentalism. But there may be hope for a reformed kind of effective altruism to supplant its current anthropocentric phase: Effective Ecocentric Altruism, or EEA.
Yes. I could live with that.
Literally.
So, in a nutshell:
Anthropocentric = Death/Existential Risk-Precipitating = Ineffective
but
Ecocentric = Life-Sustaining = Effective
As I stated in a previous post, there are no altruists on a dead planet. So let this mark the end of the era of Ineffective Anthropocentric Altruism! And let the era of Effective Ecocentric Altruism begin!
References + Abstracts
[1] Earth beyond six of nine planetary boundaries
Katherine Richardson, Will Steffen, [...], and Johan Rockström (+26 authors)
Science Advances, 13 Sep 2023, Vol. 9, Issue 37. DOI: 10.1126/sciadv.adh2458
Abstract
This planetary boundaries framework update finds that six of the nine boundaries are transgressed, suggesting that Earth is now well outside of the safe operating space for humanity. Ocean acidification is close to being breached, while aerosol loading regionally exceeds the boundary. Stratospheric ozone levels have slightly recovered. The transgression level has increased for all boundaries earlier identified as overstepped. As primary production drives Earth system biosphere functions, human appropriation of net primary production is proposed as a control variable for functional biosphere integrity. This boundary is also transgressed. Earth system modeling of different levels of the transgression of the climate and land system change boundaries illustrates that these anthropogenic impacts on Earth system must be considered in a systemic context.
https://www.science.org/doi/10.1126/sciadv.adh2458
[2] Louis J. Kotzé and Duncan French, “The Anthropocentric Ontology of International Environmental Law and the Sustainable Development Goals: Towards an Ecocentric Rule of Law in the Anthropocene”, Global Journal of Comparative Law, Vol. 7, Issue 1 (2018).
Abstract
In this article we argue that the Anthropocene’s deepening socio-ecological crisis amplifies demands on, and exposes the deficiencies of, our ailing regulatory institutions, including that of international environmental law (iel). Many of the perceived failures of iel have been attributed to the anthropocentric, as opposed to the ecocentric, ontology of this body of law. As a result of its anthropocentric orientation and the resultant deficiencies, iel is unable to halt the type of human behaviour that is causing the Anthropocene, while it exacerbates environmental destruction, gender and class inequalities, growing inter- and intra-species hierarchies, human rights abuses, and socio-economic and ecological injustices. These are the same types of concerns that the recently proclaimed Sustainable Development Goals (sdgs) set out to address. The sdgs are, however, themselves anthropocentric; an unfortunate situation which reinforces the anthropocentrism of iel and vice versa. Considering the anthropocentric genesis of iel and the broader sdgs framework, this article sets out to argue that the anthropocentrism inherent in the ontological orientation of iel and the sdgs risks exacerbating Anthropocene-like events, and a more ecocentric orientation for both is urgently required to enable a more ecocentric rule of law to better mediate the human-environment interface in the Anthropocene. Our point of departure is that respect for ecological limits is the only way in which humankind, acting as principal global agents of care, will be able to ensure a sustainable future for human and non-human constituents of the Earth community. Correspondingly, the rule of law must also come to reflect such imperatives.
https://brill.com/view/journals/gjcl/7/1/article-p5_5.xml
We appreciate your perspective; it gives us a chance to clarify our goals. The case you refer to was intended as an example of the ways in which normative uncertainty matters, and we did not mean for the views there to accurately model real-world moral dilemmas or the span of reasonable responses to them.
However, you might also object that we don’t really make it possible to incorporate the intrinsic valuing of natural environments in our Moral Parliament Tool. Some might see this as an oversight. Others might be concerned about other missing subjects of human concern: respect for God, proper veneration of our ancestors, aesthetic value, etc. We didn’t design the tool to encompass the full range of human values, but to reflect the major components of the values of the EA community (which is predominantly consequentialist and utilitarian). It is beyond the scope of this project to assess whether those values should be exhaustive. That said, we don’t think strict attachment to the values in the tool is necessary for deriving insights from it, and we think it models approaches to normative uncertainty well even if it doesn’t capture the full range of the subjects of human normative uncertainty.
I’ve expanded this comment and turned it into a forum post:
https://forum.effectivealtruism.org/posts/FiZCpQrA9SYCntwDQ/anthropocentric-altruism-is-ineffective-the-ea-movement-must