I’m currently writing a dissertation on Longtermism with a focus on non-consequentialist considerations and moral uncertainty. I am generally interested in the philosophical aspects of global priorities research and plan to contribute to that research after my DPhil as well. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked in Germany for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.
Jakob Lohmar
One thing this suggests is that you might think of this slightly differently when you are asking “What is this activity like?” at the individual level vs asking “What prioritisation are we doing?” at the movement level. A more narrowly focused individual project might be a contribution to wider cause prioritisation. But if, ultimately, no-one is considering anything outside of a single cause area, then we as a movement are not doing any broader cause prioritisation.
I also think that this is crucial for understanding the whole picture. Just as employees in a company (or indeed scientists) each work on some narrow task, members of EA could each work on prioritization in a narrow field while the output of the community as a whole is unrestricted CP. But I agree that it is important to also have people who think more about the big picture and prioritize across different cause areas.
Thanks, David!
It might be helpful to distinguish two related but distinct issues here: a) there are edge cases of prio-work where it is (even) intuitively unclear whether they should be categorized as CP or WCP, and b) my more theoretical point that this kind of categorization is fundamentally relative to cause individuations.
The second issue (b) seems to be the one that is, in principle, more damaging to your results, as it suggests that your findings may hold only relative to one of many possible individuations. But I think it’s plausible (although not obvious to me) that in fact it doesn’t make a big difference, because (i) a lot of actual prio-work uses something like your cause individuation as a reference point (i.e. there is not that much prio-work between your general causes and relatively specific interventions), and also because (ii) in the end, your analysis doesn’t seem to apply the specific cause individuation mentioned at the beginning very strictly: it seems that you rather think of causes as something like global health, animals, and catastrophic risks, but not necessarily these in particular? So I wonder if your results could be redescribed as holding relative to a cause individuation of roughly the generality/coarse-grainedness of the one you suggest, where the one you mention is only a proxy or example that could be replaced by similarly coarse-grained individuations. Then, for example, your result that only 8% of prio-work is CP would mean that 8% of prio-work operates at roughly the level of generality of causes like global health, animals, and catastrophic risks, although not all of that work compares these causes in particular.
So I think that your results are probably rather robust in the end. Still, it would be interesting to repeat the exercise based on a moderately fine-grained cause individuation that distinguishes between, say, 15 causes (maybe similar to Will’s) and see if anything changes significantly.
Many thanks for your reply! These are great points and I think there is some truth to them, but here is a bit of pushback against them (or, I guess, just against your first point).
But, crucially, these researchers are still only making prioritisations within the super-ordinate Animal Welfare cause. They’re not comparing e.g. PBMA to nuclear security initiatives. So I think you would need to say something like: these people are engaged in cause-level but within-cause prioritisation.
But I think you could say something analogous about other CP work? For example, there is a lot of discussion on whether we should focus on far future people or present people. This seems to be an instance of CP, but you could still say that the two causes fall within the super-ordinate cause of “Human Welfare”. So it seems unnecessary for genuine CP that a cause is compared to causes that cannot be categorized under the same super-ordinate cause. This would be too demanding as a condition for CP, since you can (almost) always find a common super-ordinate cause for the compared (sub-)causes.
But if that is true, the fine-grainedness of the cause individuation does seem to make a difference to whether something counts as CP. For example, work on whether we should prioritize wild animals or farmed animals would then be genuine CP according to a cause individuation that includes ‘wild animals’ and ‘farmed animals’, but not according to your cause individuation, which only includes ‘animals’ as a more general category. Maybe work that only compares ‘wild animals’ with ‘farmed animals’ but not with other causes seems strange, but the ultimate goal of this work could well be to find out which cause is best overall. A conclusion on this could be reached by combining this work with other work at a similar level of generality, such as work on whether to prioritize ‘farmed animals’ or ‘global poverty’.
As a concrete example, maybe it’s helpful to look at Will’s recent suggestion that EA should acknowledge as cause areas: AI safety, AI character, AI welfare / digital minds, the economic and political rights of AIs, AI-driven persuasion and epistemic disruption, AI for better reasoning, decision-making and coordination, and the risk of (AI-enabled) human coups. Now imagine someone does research on the comparative effectiveness of Will’s AI causes. Should we consider this CP or WCP? It seems it is CP relative to Will’s cause individuation but WCP relative to a cause individuation that subsumes all of these under ‘AI’.
This is a classic case of a surprising and suspicious convergence
Not sure if this distinction is made in the original post, but I’d say that the convergence in this case is not surprising, since there is a fairly obvious explanation for it. It is all the more suspicious, though, since the alternative explanation for doing “entertainment for EAs” is the immediate fun and recognition one gets from it (rather than merely an emotional connection to the cause).
Overall, I guess it’s good to have an “entertainment for EAs detector” that is not very sensitive, so that it only goes off when large amounts of resources are at stake. E.g. not when it’s about writing a fun post or buying pizza for attendees, but when it’s about… buying an abbey.
I’m late to the party but would still be interested in what you think of this: cause areas can be individuated in more or less fine-grained ways. For example, we could consider ‘animal welfare’ one cause area, or ‘wild animal welfare’ and ‘farmed animal welfare’ two cause areas, and we could again individuate more fine-grainedly between ‘wild invertebrate welfare’ and ‘wild vertebrate welfare’, and so on. I think that you might even end up with (what is intuitively thought of as) interventions at some point by making causes more and more fine-grained. If so, there is no fundamental difference between ‘causes’ and ‘interventions’.
Now that is not to say that distinguishing between causes and interventions is not useful, and some cause/intervention individuations are certainly more intuitive than others. But if there are several permissible/useful/intuitive ways of individuating them, you might get a different picture of the resource allocation between CP and WCP (and indeed also CCP). Generally, I think that the more fine-grainedly causes are individuated, the more work will count as CP rather than WCP. Conversely, if you individuate causes in a very coarse-grained way, it is unsurprising that most prioritization work will count as ‘within a cause’. In the extreme case where you only consider a single all-encompassing cause, all prioritization will necessarily be within that cause. If you distinguish only between two causes (say, human and non-human welfare), there can be genuine CP, namely between these two causes, but it still wouldn’t be surprising if most prio-work fell within one of these two causes and therefore counted as WCP. Now, you distinguish between three causes. That is not unusual in EA but still very coarse-grained, and I think you could sensibly distinguish instead between, say, 10 cause areas or so. Would this affect the result of your analysis such that more prio-work would count as CP?
If there are so many new promising causes that EA should plausibly focus on (going from something like 5 to something like 10 or 20), cause-prio between these causes (and ideas for related but distinct causes) should be especially valuable as well. I think Will agrees with this (after all, his post is based on exactly this kind of work!), but the emphasis in this post seemed to be that EA should invest more into these new cause areas rather than investigate further which of them are the most promising, and which aren’t that promising after all. It would be surprising if we couldn’t still learn much more about their expected impact.
Hey Kritika, great work! I must admit that I haven’t read all passages carefully yet, but here are some high-level thoughts that immediately came to mind.
The requirements for Institutional Longtermism that you suggest seem to me like desiderata from a (purely) longtermist perspective, but I don’t see why they should be considered requirements? For example, you suggest that it is a requirement that core long-term policies can only be modified by a supermajority of e.g. 90%. This may be desirable from a longtermist perspective, but long-term policies that can be modified by a smaller supermajority, or even just a majority vote, would still be valuable from that perspective.
This seems analogous to other causes, such as animal welfare. From a purely animal welfare perspective, it may be desirable to have animal welfare policies that cannot be modified by even 90% of voters, and so on. But that doesn’t mean that animal welfare is incompatible with democracy?
I guess you see the difference between the longtermist cause and other causes as lying in longtermism’s demands: we should design institutions, without exception, such that they are optimized for the long-term future because the long-term future matters that incredibly much. But that would be a very extreme form of longtermism. Even Greaves and MacAskill’s Strong Longtermism only makes claims about what we should do / what is best to do on the margin. It doesn’t say that we should spend all (or even 50% of) our resources on the long-term future. Similarly, Institutional (even Strong) Longtermism could merely claim that a fraction of public resources should be spent on the long term. Let’s say that’s 10%. Then, decisions about the remaining 90% of public resources could be made based on democratic procedures.
Finally, I think it is even desirable from a longtermist perspective to leave important political decisions in the hands of future people: they will probably know better how to improve the long-term future (e.g. because of improved forecasting).
That seems like a strange combination indeed! I will need to think more about this...
and perhaps under-rewarded given it is less exciting.
...especially so in academia! I’d say that in philosophy mediocre new ideas are more publishable than good objections.
Yeah, that makes sense to me. I still think that one doesn’t need to be conceptually confused (even though this is probably a common source of disagreement) to believe both (i) that one action’s outcome is preferable to the other action’s outcome and (ii) that one nevertheless ought to perform the latter action. For example, one might think the former outcome is overall preferable because it has much better consequences. But conceptual possibility aside, I agree that this is a weird view to have. At the very least, it seems that, all else equal, one should prefer the outcome of the action that one takes to be the most choiceworthy. I’m not sure whether it has some plausibility to say that this doesn’t necessarily hold if other things are not equal, such as in the case where the other action has the better consequences.
Thanks, also for the link! I like your notion of preferability and the analysis of competing moral theories in terms of this notion. What makes me somewhat hesitant is that the objects of preferability, in your sense, seem to be outcomes or possible worlds rather than the to-be-evaluated actions themselves? If so, I wonder if one could push back against your account by insisting that the choiceworthiness of available acts is not necessarily a function of the preferability of their outcomes, since… not all morally relevant features of an action are necessarily fully reflected in the preferability of its outcome?
But assuming that they are, I guess that non-consequentialists who reject full aggregation would say that the good that is larger in aggregate is not necessarily preferable. But I’m not sure. I agree that this doesn’t seem very intuitive.
I couldn’t agree more. Moral philosophers tend to distinguish the ‘axiological’ from the ‘deontic’ and then interpret ‘deontic’ in a very narrow way, which leaves out many other (in my opinion: more interesting) normative questions. This is epistemically detrimental, especially when combined with the misconception that ‘axiology is only for consequentialists’. It invites flawed reasoning of the kind: “consideration X may be important for axiology but since we’re not consequentialists, that doesn’t really matter to us, and surely X doesn’t *oblige* us to act in a certain way (that would be far too demanding!), so we don’t need to bother with X”.
That said, I think there is still a good objection to the stakes-sensitivity principle, which is due to Andreas Mogensen: full aggregation is true when it comes to axiology (‘the stakes’), but it arguably isn’t true with regard to choiceworthiness/reasons. Hence, it could be that an action has superb consequences, but that this only gives us relatively weak reason to perform the action. That reason may not be strong enough to outweigh non-consequentialist considerations such as constraints.
That’s an interesting take! I have lots of thoughts on this (maybe I will add other comments later), but here is the most general one: it is one thing to create new ideas and another to assess their plausibility. You seem to focus a lot on the former: most of your examples of valuable insights are new ideas rather than objections or critical appraisals. But testing and critically discussing ideas is valuable too. Without such work, there would be an overabundance of ideas with no separation between the good and the bad ones. I think the value of many essays in this volume stems from doing this kind of work. They address an already existing promising idea, longtermism, and assess its plausibility and importance.
I think that most longtermists are aware of the motivational challenge you point out. In fact, major works on longtermism address this challenge, such as Toby Ord’s “The Precipice”, which argues for the importance of mitigating existential risks from a wide range of moral views. Since the motivational challenge is already understood, I think the most valuable parts of this post are the final paragraphs that sketch how the challenge could be overcome. Like Toby, I’d encourage you to develop these ideas of yours further, especially since they seem to come apart from moral philosophy’s obsession with the question of whether we are ‘required’ or ‘obligated’ to do the right/best thing.
Yeah, framed like this, I like their decision best. In the important sense, you could say, they are still cause-neutral. It’s just that their cause-neutral evaluation has now come to a very specific result: all the most cost-effective career choices are in (or related to) AI. If this is indeed the whole motivation for 80k’s strategic shift, I would have liked this post to use this framing more directly: “we have updated our beliefs on the most impactful careers” rather than “we have made strategic shifts” as the headline. On my first reading, it wasn’t clear to me whether the latter is a consequence of only the former.
Good reply! I thought of something similar as a possible objection to my premise (2) that 80k should fill the role of the cause-neutral org. Basically, there are opportunity costs to 80k filling this role because it could also fill the role of (e.g.) an AI-focused org. The question is how high these opportunity costs are, and you point out two important factors. What I take to be important, and plausibly decisive, is that 80k is especially well suited to fill the role of the cause-neutral org (more so than the role of the AI-focused org) due to its history and the brand it has built. Combined with a ‘global’ perspective on EA according to which there should be one such org, it seems plausible to me that it should be 80k.
Here is a simple argument that this strategic shift is a bad one:
(1) There should be (at least) one EA org that gives career advice across cause areas.
(2) If there should be such an org, it should be (at least also) 80k.
(3) Thus, 80k should be an org that gives career advice across cause areas.
(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)
I’m wondering which of these premises 80k disagrees with (and what you think about them!). They indicate in this post that they think it would be valuable to have orgs that cover other individual cause areas such as biorisk. But I think there is a strong case for having an org that is not restricted to specific cause areas. After all, we don’t want to do the most good in cause area X but the most good, period.
At the same time, 80k seems like a great candidate for such a cause-neutral org. They have done great work so far (as far as I can tell), and they have built up valuable resources (experience, reputation, outputs, …) through this work that would help them do even better in the future.
Does he explicitly reject some EA ideas (e.g. longtermism), and does he give arguments against them? If not, it seems a bit odd to me to promote a new school that is like EA in most other important respects. It might be good to have this school in addition anyway, but its relation to EA and what its additional value might be seem like obvious questions that should be addressed.
I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person even happier, or you can use these resources to allow the existence of a happy person who would otherwise not be able to exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person: she has a happy life and is grateful for it. Was your decision to create this person wrong? Would it have been any better not to create her but to make the original person happier still? Intuitively, I’d say, the answer is ‘no’. Creating her was the right decision.
Yeah, good points. I think for exactly these reasons it is important that each (sub-)cause is included in not only one but several partial rankings. However, that ranking needn’t be a total ranking; it could itself be a partial ranking. E.g. one partial ranking is ‘present people < farmed animals’ and another one is ‘farmed animals < wild animals’. From these, we could infer (by transitivity of “<”) that ‘present people < wild animals’, which already gets us closer to a total ranking. So I think one way that a partial ranking of (sub-)causes can help determine a total ranking, and hence the ‘best cause’ overall, is if there are several overlapping partial rankings.
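Just to make the transitivity step concrete, here is a minimal sketch in Python (the cause names are placeholders from the example above, and the transitive_closure helper is something I made up for illustration, not a claim about how prio-work is actually done): combining several overlapping partial rankings and closing them under transitivity yields comparisons that no single ranking contained on its own.

```python
from itertools import product

# Each partial ranking is a set of (worse, better) pairs.
# The cause names are placeholders from the example above.
rankings = [
    {("present people", "farmed animals")},  # present people < farmed animals
    {("farmed animals", "wild animals")},    # farmed animals < wild animals
]

def transitive_closure(pairs):
    """Add every pair (a, c) that follows from a < b and b < c."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

combined = transitive_closure(set().union(*rankings))
print(combined)
# Contains ('present people', 'wild animals'): a comparison that
# neither partial ranking made on its own.
```

Of course, this is just the trivial logical step; the real work lies in producing the individual partial rankings in the first place.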
(By the way, just in case you didn’t see it: I had written another reply to your previous comment. No need to answer it, but I wanted to make sure you didn’t overlook it, since I wrote two separate replies.)