I suppose so. But if you don’t think the article provides new reasons to care less about avoiding the Repugnant Conclusion, then it doesn’t provide new reasons to focus on other moral problems more.
Thank you for your comments, Max and John. They inclined me to be quite a bit more favourable to the paper. I still have mixed feelings: while I respect the urge to move a stale conversation on, I don’t think the authors provide new object-level reasons to do so. They do provide a raw (implicit?) appeal for others, as their peers, to update in their direction, but I’m sceptical that’s what philosophy should involve.
When I first saw the paper, I thought “oh cool, how novel for philosophers to come together and say they agree on something, for once”. But then, as I reflected on it a couple of days later, I thought the publication was odd. After all, there’s not much in the way of argument, so the paper is really just a statement of opinion. As such, there is a problematic whiff of an appeal to authority and social pressure here: “oh, you think the repugnant conclusion is repugnant? But you shouldn’t, because all these smart people disagree with you. Just get with the programme, okay?”
In general, I don’t see how papers which say (little more than) “We agree with X” merit publication. What would be the point of a paper which said, e.g. “We, some utilitarian philosophers, do not think the usual objections to utilitarianism succeed because of the usual counter-objections”? We already know that philosophers believe a variety of things.
TL;DR. I’m very substantially in agreement with Brian’s comment. I expand on those concerns, put them in stronger terms, then make a further point about how I’d like 80k to have more of a ‘public service broadcasting’ role. Because this is quite long, I thought it was better to have it as a new comment.
It strikes me as obviously inappropriate to describe the podcast series as “effective altruism: an introduction” when it focuses almost exclusively on a specific worldview—longtermism. The fact that this objection is acknowledged, and that a “10 problem areas” series is also planned, doesn’t address it. In addition, and relatedly, it seems mistaken to produce and distribute such a narrow introduction to EA in the first place.
The point of EA is to work out how to do the most good, then do it. There are three target groups one might try to benefit: (1) (far) future lives, (2) near-term humans, and (3) (near-term) animals. Given this, one cannot, in good faith, call something an ‘introduction’ when it focuses almost exclusively on object-level attempts to benefit just one group. At the very least, this does not seem to be in good faith when there is a substantial fraction of the EA community, and of people who try to live by EA principles, who prioritise each of the three.
For people inside effective altruism who do not share 80k’s worldview, stating that this is an introduction runs the serious risk of conveying to those people that they are not “real EAs”, they are not welcome in the EA community, and their sincere and thoughtful labours and perspectives are unimportant. It does not seem adequately inclusive, welcoming, open-minded, and considerate—values EAs tend to endorse.
For people outside EA who are being introduced to the ideas for the first time, it genuinely fails to introduce them to the relevant possibilities of how they might do the most good, leaving them with a misleading impression of what EA is or can be. It would have been trivially easy to include the Bollard and Glennerster interviews—or something else to represent those who focus on animals or humans in the near term—and so indicate that those are credible altruistic paths and enthuse those who might take them.
By analogy, if someone taught an “introduction to political ideologies” course which glossed over conservatism and liberalism to focus primarily on (the merits of) socialism, you would assume they were either incompetent or pushing an agenda. Either way, if you hoped that they would cover all the material and do so in an even-handed manner, you would be disappointed.
Given this podcast series is not an introduction to effective altruism, it should not be called “effective altruism: an introduction”. More apt might be “effective longtermism: an introduction” or “80k’s opinionated introduction to effective altruism” or “effective altruism: 80k’s perspective”. In all cases, there should be more generous signposting of what the other points of view are and where they could be found.
A good introduction to EA would, at the very least, include a wide range of steel-manned positions about how to do the most good that are held by sincere, thoughtful, individuals aspiring to do the most good. I struggle to see why someone would produce such a narrow introduction unless they thought those holding alternative views were errant and irrelevant fools.
I can imagine someone defending 80k by saying that this is their introduction to effective altruism and there’s nothing to stop someone else writing their own and sharing it (note RobBensinger does this below).
While this is technically true, I do not find it compelling for the following reason. In a cooperative altruistic community, you want to have a division, rather than a duplication, of labour, where people specialise in different tasks. 80k has become, in practice, the primary source of introductory materials to EA: it is the single biggest channel by which people are introduced to effective altruism, with 17% of EA survey respondents saying they first heard about EA through it; it produces much of the introductory content individuals read or listen to. 80k may not have a monopoly on telling people about EA, but it is something like the ‘market leader’.
The way I see it, given 80k’s dominant position, they should fulfil something like a public service broadcasting role for EA, where they strive to be impartial, inclusive, and informative (https://en.wikipedia.org/wiki/Public_broadcasting).
Why? Because they are much better placed to do it than anyone else! In terms any 80k reader will be familiar with, 80k should do this because it is their comparative advantage and they are not easily replaced. Their move to focusing on longtermism has left a gap. A new organisation, Probably Good, has recently stepped into this gap to provide more cause-neutral careers advice, but I see it as cause for regret that this had to happen.
While I think it would be a good idea if 80k had more of a public service broadcasting model, I don’t expect this to happen, seeing as they’ve consciously moved away from it. It does, however, seem feasible for 80k to be a bit more inclusive—in this case, one very easy thing would be to expand the list from 10 to 12 items so that concerns for animals and near-term humans feature. It would be a huge help to non-longtermist EAs if 80k talked about them a bit (more), and it would be a small additional cost to 80k.
I want to focus on the following because it seems to be a problematic misunderstanding:
“1. Temporal position should not impact ethics (hence longtermism)”
This genuinely does seem to be a common view in EA, namely, that when someone exists doesn’t (in itself) matter, and that, given impartiality with respect to time, longtermism follows. Longtermism is the view that we should be particularly concerned with ensuring long-run outcomes go well.
The reason this understanding is problematic is that probably the two strongest objections to longtermism (in the sense that, if these objections hold, they rob longtermism of its practical force) have nothing to do with temporal position in itself. I won’t say whether these objections are, all things considered, plausible; I’ll merely set out what they are.
First, there is the epistemic objection to longtermism (sometimes called the ‘tractability’, ‘washing-out’, or ‘cluelessness’ objection): in short, we can’t be confident enough about the impact our actions will have on the long-run future to make it the practical priority. See this for recent discussion and references: https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis#comments. Note this has nothing to do with valuing people differently because of their position in time.
Second, there is the ethical objection that appeals to person-affecting views in population ethics and has the implication that creating (happy) lives is neutral.* What’s the justification for this implication? One justification could be ‘presentism’, the view that only presently existing people matter. This is a justification based on temporal position per se, but it is (I think) highly implausible.
An alternative justification, which does not rely on temporal position in itself, is ‘necessitarianism’, the view that the only people who matter are those who exist necessarily (i.e. in all outcomes under consideration). The motivation for this is (1) outcomes can only be better or worse if they are better or worse for someone (the ‘person-affecting restriction’) and (2) existence is not comparable to non-existence for someone (‘non-comparativism’). In short, it isn’t better to create lives, because it’s not better for the people who get created. (I am quite sympathetic to this view and think too many EAs dismiss it too quickly, often without understanding it.)
The further thought is that our actions change which specific individuals get created (e.g. consider whether any particular individual alive today would exist if Napoleon had won at Waterloo). The result is that our actions, which aim to benefit (far) future people, cause different people to exist. This isn’t better for either the people who would have existed or the people who will actually exist. This is known as the ‘non-identity problem’. Necessitarians might explain that, although we really want to help (far) future people, we simply can’t. There is nothing, in practice, we can do to make their lives better. (Rough analogy: there is nothing, in practice, we can do to make trees’ lives go better—only sentient entities can have well-being.)
Note, crucially, this has nothing to do with temporal position in itself either. It’s the combination of only necessary lives mattering and our actions changing which people will exist. Temporal position is ethically relevant (i.e. instrumentally important), but not ethically significant (i.e. doesn’t matter in itself).
*You can have symmetric person-affecting views (creating lives is neutral). You can also have asymmetric person-affecting views (creating happy lives is neutral, creating unhappy lives is bad). Asymmetric PAVs may, or may not, have concern for the long term, depending on what the future looks like and whether they think adding happy lives can compensate for adding unhappy lives. I don’t want to get into this here as this is already long enough.
Ha. I like this name.
While I’m writing, I’ll mention I seriously proposed calling HLI the Bentham Institute for Global Happiness (BIGHAP), but it was put to an internal vote and I, tragically, lost. I am fairly confident not calling it BIGHAP will be my biggest deathbed regret.
Pablo could you, or perhaps some other kind forum reader, provide a brief explanation of what they actually do? The abstract more-or-less says ‘we solve a problem’, but it’s unclear exactly how they solve the problem—I have no intuitive purchase on what “more inclusive formalizations” means—so don’t know whether it’s a good use of time to read the paper.
I’d like to know what the Happier Lives Institute should be; we never liked the name anyway.
Ah, this is great. Evidence the selectors could tell the top 2% from the rest, but that within the 2%–20% range it was much of a muchness. Shame that it doesn’t give any more information on ‘commercial success’.
I’m not sure how to assess what counts as ‘core EA’! But I don’t think the org bills itself as EA, or that the overwhelming majority of its staff self-identify as EAs (cf. the way the staff at, um, CEA probably do...)
Short answer: Yes. FWIW, Partha is the Chair of CSER (Centre for the Study of Existential Risk) which has, or has had, quite a few EA-sympathetic people in it. I have no idea how widely he is known across EA more broadly.
I’m not trying to be obtuse—it wasn’t super clear to me on a quick-ish skim; maybe if I’d paid more attention I’d have clocked it.
Yup, I was too hasty on VCs. It seems like they are pretty confident they know who the top 5% are, but can’t say anything more precise than that. (Although I wonder what evidence indicates they can reliably tell the top 5% from those below, rather than that they just think they can.)
I was thinking the emphasis on outputs might be the important part, as those are more controllable than outcomes, and so the decision-relevant bit, even though what we ultimately want to maximise is impartial value (outcomes).
I can imagine someone thinking the following way: “we must find and fund the best scientists because they have such outsized outcomes, in terms of citations.” But that might be naive if it’s really just the top scientist who gets the citations and the work of all the good scientists has a more or less equal contribution to impartial value.
FWIW, it’s not clear we’re disagreeing!
Okay, good! Yeah, I’d be curious to see how much distinguishing outputs from outcomes—and, further, between different types of outputs—would change the analysis.
Yeah, I’d be interested to know if VCs were better than chance. Not quite sure how you would assess this, but probably someone’s tried.
But here’s where it seems relevant. If you want to pick the top 1% of people, as they provide so much of the value, but you can only pick the top 10%, then your efforts to pick are much less cost-effective and you would likely want to rethink how you did it.
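To see the arithmetic, here’s a minimal sketch—the lognormal distribution and its parameters are invented purely for illustration, not taken from any data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-person value: heavy-tailed. The lognormal shape and
# sigma are illustrative assumptions, not estimates from any dataset.
values = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

top_1_cutoff = np.quantile(values, 0.99)   # threshold for the top 1%
top_10_cutoff = np.quantile(values, 0.90)  # threshold for the top 10%

mean_if_top_1 = values[values >= top_1_cutoff].mean()
mean_if_top_10 = values[values >= top_10_cutoff].mean()

# If your selection can only isolate the top 10%, the average value of
# each pick is a fraction of what top-1% selection would deliver.
print(f"mean value per pick, top 1%:  {mean_if_top_1:.1f}")
print(f"mean value per pick, top 10%: {mean_if_top_10:.1f}")
print(f"ratio: {mean_if_top_1 / mean_if_top_10:.1f}x")
```

On these made-up numbers, each top-1% pick is worth several times a top-10% pick, which is the sense in which the coarser selection is much less cost-effective.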
I was going to raise a similar comment to what others have said here. I hope this adds something.
I think we need to distinguish quality and quantity of ‘output’ from ‘success’ (the outcome of that output). I am deliberately not using ‘performance’ as it’s unclear, in common language, which of the two it refers to. Various outputs are highly reproducible—anyone can listen to a music track, or read an academic paper. There are often huge rewards to being the best vs second best—e.g. winning in sports. And sometimes success generates further success (the ‘Matthew effect’)—more people want to work with you, etc. Hence, I don’t find it at all weird to think that small differences in outputs, as measured on some cardinal scale, sometimes generate huge differences in outcomes.
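As a toy illustration of that last sentence (all numbers invented): feed nearly identical outputs through a winner-take-most reward rule and the outcomes come out wildly unequal.

```python
import numpy as np

# Hypothetical outputs on a cardinal scale: five people of almost
# identical quality. The figures are made up for illustration.
outputs = np.array([100.0, 99.0, 98.0, 97.0, 96.0])

# A winner-take-most reward rule (attention, citations, prizes...):
# reward grows exponentially in output, so small output gaps become
# large outcome gaps. The scale factor 0.5 is an arbitrary assumption.
rewards = np.exp(outputs / 0.5)
shares = rewards / rewards.sum()

for out, share in zip(outputs, shares):
    print(f"output {out:5.1f} -> share of outcomes {share:6.2%}")
```

The exact reward rule doesn’t matter; anything sufficiently convex (tournaments, Matthew effects) produces the same qualitative picture.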
I’m not sure exactly what follows from this. I’m a bit worried you’re concentrating on the wrong metric—success—when it’s outputs that are more important. Can you explain why you focus on outcomes?
Let’s say you’re thinking about funding research. How much does it matter to fund the best person? I mean, they will get most of the credit, but if you fund the less-than-best, that person’s work is probably not much worse and ends up being used by the best person anyway. If the best person gets 1,000 times more citations, should you be prepared to spend 1,000 times more to fund their work? Not obviously.
I’m suspicious you can do a good job of predicting ex ante outcomes. After all, that’s what VCs would want to do and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.
It might be interesting to investigate differences in quality and quantity of outputs separately. Intuitively, it seems the best people do produce lots more work than the good people, but it’s less obvious the quality of the best people is much higher than of the good. I recognise all these terms are vague.
Thanks very much for writing this. I’d started to wonder about the same idea, but this is a much better and clearer analysis than I could have done! A few questions as I try to get my head around it.
Could you say more about why the predictability trends towards zero? It’s intuitive that it does, but I’m not sure I can explain that intuition. Something like: we should have a uniform prior over the actual value of the action at very distant periods of time, right? An alternative assumption would be that the action has a continuous stream of benefits in perpetuity. I’m not sure how reasonable that is. Or is it the inclusion of counterfactuals, i.e. that if you didn’t do that good thing, someone else would be right behind you anyway?
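To try to pin down my intuition, here’s one rough formalisation—the per-period persistence probability q is my own modelling assumption, not something from the post:

```latex
% Sketch: suppose an action's benefit v survives each period
% independently with probability q. Its expected value at time t is
\[
  \mathbb{E}[V_t] \;=\; v\,q^{t} \;\longrightarrow\; 0
  \quad \text{as } t \to \infty \quad (0 \le q < 1).
\]
% Only if q = 1 -- i.e. the action puts the world into a genuine
% attractor state -- does the expected benefit fail to wash out.
```

Is that roughly the picture, or is something else driving the decay?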
Regarding ‘attractor states’, is the thought then that we shouldn’t have a uniform prior regarding what happens to those in the long run?
I’m wondering if the same analysis can be applied to actions as to the ‘business as usual’ trajectory of the future, i.e. where we don’t intervene. Many people seem to think it’s clear that the future, if it happens, will be good, and that we shouldn’t discount it to/towards zero.
I think this article, and/or this excerpt of it, would be improved by an explanation of how derivatives work.
Yup. I suspect Bader’s approach is ultimately ad hoc (I saw him present it at a conf and haven’t been through the paper closely) but I do like it.
On the second bit, I think that’s right with the A, A+ bit: the person-affector can see that letting the new people arrive and then redistributing to everyone is worse for the original people. So if you think that’s what will happen, you should avoid it. Much the same can be said about the child.
Not sure I follow. Are you assuming anti-realism about metaethics or something? Even so, if your assessment of outcomes depends, at least in part, on how good/bad those outcomes are for people, the problem remains.