I’m currently writing a dissertation on Longtermism with a focus on non-consequentialist considerations and moral uncertainty. I’m generally interested in the philosophical aspects of global priorities research and plan to keep contributing to that research after my DPhil. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked in Germany for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.
Jakob Lohmar
Good reply! I thought of something similar as a possible objection to my premise (2) that 80k should fill the role of the cause-neutral org. Basically, there are opportunity costs to 80k filling this role because it could also fill the role of (e.g.) an AI-focused org. The question is how high these opportunity costs are, and you point out two important factors. What I take to be important, and plausibly decisive, is that 80k is especially well suited to fill the role of the cause-neutral org (more so than the role of the AI-focused org) due to its history and the brand it has built. Combined with a ‘global’ perspective on EA according to which there should be one such org, it seems plausible to me that it should be 80k.
Here is a simple argument that this strategic shift is a bad one:
(1) There should be (at least) one EA org that gives career advice across cause areas.
(2) If there should be such an org, it should be (at least also) 80k.
(3) Thus, 80k should be an org that gives career advice across cause areas.
(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)
I’m wondering which premise 80k disagrees with (and what you think about them!). They indicate in this post that they think it would be valuable to have orgs that cover other individual cause areas such as biorisk. But I think there is a strong case for having an org that is not restricted to specific cause areas. After all, we don’t want to do the most good in cause area X but the most good, period.
At the same time, 80k seems like a great candidate for such a cause-neutral org. They have done great work so far (as far as I can tell), and they have built up valuable resources (experience, reputation, outputs, …) through this work that would help them do even better in the future.
Does he explicitly reject some EA ideas (e.g. longtermism), and does he give arguments against them? If not, it seems a bit odd to me to promote a new school that is like EA in most other important respects. It might be good to have this school additionally anyway, but its relation to EA and what its additional value might be are obvious questions that should be addressed.
I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person even happier, or you can use these resources to allow the existence of a happy person who would otherwise not be able to exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person, she has a happy life and is grateful for it. Was your decision to create this person wrong? Would it have been any better not to create her but to make the original person happier still? Intuitively, I’d say, the answer is ‘no’. Creating her was the right decision.
I like your analysis of the situation as a prisoner’s dilemma! I think this is basically right. At least, there generally seems to be some community cost (or more generally: negative externality) to not being transparent about one’s affiliation with EA. And, as per usual with externalities, I expect these to be underappreciated by individuals when making decisions. So even if this externality is not always decisive since the cost of disclosing one’s EA affiliation might be larger in some cases, it is important to be reminded of this externality – and the reminder might be especially valuable since EAs tend to be altruistically motivated!
I wonder if you have any further thoughts on what the positive effects of transparency are in this case? Are there important effects beyond indicating diversity and avoiding tokenization? Perhaps there are also more ‘inside-directed’ effects that directly affect the community, and not only via how it appears to outsiders?
I wonder which of these things would have happened (in a similar way) without any EA contribution, and how much longer it would have taken for them to happen. (In MacAskill’s sense: how contingent were these events?) I don’t have great answers, but it’s an important question to keep in mind.
Yes indeed! When it comes to assessing the plausibility of moral theories, I generally prefer to hold “all else equal” to avoid potentially distorting factors, but the AMF example comes close to being a perfect real-world example of (what I consider to be) the more severe version of the problem.
The problem (often called the “statistical lives problem”) is even more severe: ex ante contractualism prioritizes identified people not only when the alternative is to potentially save very many people, or many people in expectation; it does the same when the alternative is to save many people for sure, as long as it is unclear which members of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.
Here is a thought experiment for illustration: There are two missiles circling Earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people; but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?
Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that they would be killed by the missile. But obviously (I’d say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)
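To make the discounting explicit, here is a minimal sketch of the arithmetic, on the simplifying assumption that the strength of an ex ante claim scales linearly with the probability of suffering the harm:

$$\text{Bob's claim} = c, \qquad \text{each city-dweller's claim} = \frac{1000}{X}\cdot c.$$

Since the ex ante view compares the strongest individual claims rather than aggregating them, Bob’s undiscounted claim $c$ outweighs each discounted claim whenever $X > 1000$; and the larger X is, the weaker each individual claim becomes, even though 1000 people are certain to die.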
Hmm, I can’t recall all its problems right now, but for one I think that the view is then no longer compatible with ex ante Pareto, which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If it’s necessary for justifiability that all the subsequent actions of a procedure are ex ante justifiable, then the initiating action could be in everyone’s ex ante interest and still not justified, right?
Thanks for your interest! I will let you know when my paper is ready/readable. Maybe I’m also going to write a forum post about it.
Yes, that’s another problem indeed—thanks for the addition! Johann Frick (“Contractualism and Social Risk”) offers a “decomposition test” as a solution on which (roughly) every action of a procedure needs to be justifiable at the time of its performance for the procedure to be justified. But this “stage-wise ex ante contractualism” has its own additional problems.
I should also at least mention that I think the more plausible versions of limiting aggregation under risk are quite compatible with classic long-term interventions such as x-risk mitigation. (I agree that the “ex post” view that Emma Curran discusses is not compatible with x-risk mitigation either, but I think that this view is not much better than the ex ante view and that there are other views that are more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar results. I guess this is not the right place to go into any detail about this, but I think it is even initially plausible that small probabilities of much better future lives ground claims that are more significant than claims that are usually considered irrelevant, such as claims based on the enjoyment of watching part of a football match or on the suffering of mild headaches.
Thanks for your helpful reply! I’m very sympathetic to your view on moral theory and applied ethics: most (if not all) moral theories face severe problems, and that is not generally a sufficient reason not to consider them when doing applied ethics. However, I think the ex ante view is one of those views that don’t deserve more than negligible weight, which is where we seem to have different judgments. Even taking into account that alternative views have their own problems, the statistical lives problem seems to be as close to a “knock-down argument” as it gets. You are right that there are possible circumstances in which the ex ante view would not prioritize identified people over any number of “statistical” people, and these circumstances might even be common. But the fact remains that there are also possible circumstances in which the ex ante view does prioritize one identified person over any number of “statistical” people, and at least to me this is just “clearly wrong”. I would be less confident if I knew of advocates of the ex ante view who remain steadfast in light of this problem; but no one seems to be willing to bite this bullet.
After arguing so insistently that we should reject the ex ante view, I feel like I should stress that I really appreciate this type of research. I think we should consider the implications of a wide range of possible moral theories, and excluding certain moral theories from this is a risky move. In fact, I think that an ideal analysis under moral uncertainty should include ex ante contractualism; it’s just that I’m afraid people tend to give too much weight to its implications and that this is worse than (for now) not considering it at all.
Hey Bob, I’m currently working on a paper about a similar issue, so this has been quite interesting to read! (I’m discussing the implications of limited aggregation more generally, but as you note, contractualism’s distinctive implications stem primarily from its (partially) non-aggregative nature.) While I mostly agree with your claims about the implications of the ex ante view, I disagree with your claim that this is the most plausible version of contractualism. In fact, I think that the ex ante view is clearly wrong and we should not be much concerned with what it implies.
First, briefly on the application part. I think you are right that, given the ex ante view, we should not focus on mitigating x-risks and should rather perform global health interventions. However, as you note, there is usually a very large group of potential beneficiaries when it comes to global health interventions, so that the probability for each individual of being benefited is quite small, resulting in heavily diminished ex ante claims. I wonder, therefore, whether we shouldn’t, on the ex ante view, rather spend our resources on (relatively needy) people we know or people in small communities. Even if these people would benefit from our resources 100+ times less than the global poor, this could well be more than compensated for by the much higher probabilities for each of these individuals of actually being benefited.
But again, I think the ex ante view is clearly false anyway. The easiest way to see this is that the view implies that we should prioritize one identified person over any number of “statistical” people. That is: on the ex ante view, we should save a given person for sure rather than (definitely!) save one million people if these are randomly chosen from a sufficiently large population. In fact, there are even worse implications (the identified person could merely lose a finger and not her life if we don’t help), but I think this implication is already bad enough to confidently reject the view. I don’t know of anybody who is willing to accept that implication. The typical (if not universal?) reaction of advocates of the ex ante view is to go pluralist and claim that the verdicts of the ex ante view correspond to only one of several pro tanto reasons. As far as I know, no such view has actually been developed, and I think any such view would be highly implausible as well; but even if it succeeded, its implications would be much more moderate: all we’d learn is that there is one of several pro tanto reasons that favours acting in (presumably) some short-term way. This could well be compatible with classic long-term interventions being overall most choiceworthy / obligatory.
I’m sure that I’m not telling you much, if anything, new here, so I wonder what you think of these arguments?
I’d say that critically examining arguments in cause prioritization is an important part of doing cause prioritization. Just as examining philosophical arguments of others is part of doing philosophy. At least, reviewing and judging arguments does not amount to deferring—which is what the post seems mainly concerned about. Perhaps there is actually no disagreement?
Is it actually bad if AI, longtermism, or x-risk are dominant in EA? That seems to depend crucially on whether these cause areas are actually the ones in which the most good can be done, and whether we should believe that depends on how strong the arguments backing these cause areas are. Assume, for example, that we can do by far the most good by focusing on AI x-risks and that there is an excellent case / compelling arguments for this. Then this cause area should receive significantly more resources and should be much more talked about, and promoted, than other cause areas. Treating it just like other cause areas would be a big mistake: the (assumed) fact that we can do much more good in this cause area is a great reason to treat it differently!
To be clear: my point is not that AI, longtermism, or anything else should be dominant in EA, but that how these cause areas should be represented in EA (including whether they should be dominant) depends on the object-level discourse about their cost-effectiveness. It is therefore not obvious, and depends on difficult object-level questions, whether a given degree of dominance of AI, longtermism, or any other cause area is justified. (I take this to be in tension with some points of the post and some of the comments, but not incompatible with most of them.)
Interesting! I recently decided not to comment on a post that is not even a month old because there was no discussion about it anymore (and there hadn’t been any even just a few days after it was posted), and I thought that my comment would be read by barely anyone. I’m not saying that this is generally a reasonable decision even in the current system, but sometimes it is, even when one has something interesting to say… Maybe it’s worth writing a longer post about this?
Hi everyone! I’m a DPhil philosophy student at Oxford University, where I’m writing a dissertation on Longtermism (see my bio for some more info). I’ve been involved with EA for several years and just decided to (finally!) create a forum account. Looking forward to commenting on some of the great forum posts instead of just reading them, as I have done so far :)
Wasn’t sure whether to write a bio because many users seem not to have one. Your post convinced me to write a bio anyway. Thanks!
Yeah, framed like this, I like their decision best. In the important sense, you could say, they are still cause-neutral. It’s just that their cause-neutral evaluation has now come to a very specific result: all the most cost-effective career choices are in (or related to) AI. If this is indeed the whole motivation for 80k’s strategic shift, I would have liked this post to use this framing more directly: “we have updated our beliefs on the most impactful careers” rather than “we have made strategic shifts” as the headline. On my first reading, it wasn’t clear to me whether the latter is merely a consequence of the former.