Great stuff! A few quibbles:
It feels odd to specify an exact year EA (or any movement) was ‘founded’. GiveWell (surprisingly not mentioned other than a logo on slide 6) has been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn’t exactly have a clear founding date, though, instead becoming more like their modern organisations over the years. One might not consider some of them more strictly ‘EA orgs’ than others - but that’s kind of the point.
I’d be wary of including ‘moral offsetting’ as an EA idea. It’s fairly controversial, and sounds like the sort of thing that could turn people off the other ideas
Agree with others that overusing the word ‘utilitarianism’ seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).
Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well
I’d be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that’s received a lot of publicity in the EA community, but that’s far from universally accepted even by eg utilitarians and EAs
On slide 18 it seems odd to have an ‘other’ category on the right, but omit it on the left with a tiny ‘clothing’ category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with ‘other’ - which I think would make the graph clearer
I also find the colours on the same graph a bit too similar—my brain keeps telling me that ‘farm’ is the second biggest categorical recipient when I glance at it, for example
I haven’t read the Marino paper and now want to, ’cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it’s becoming rapidly less defensible to believe that they don’t experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now read-skimmed it, I didn’t update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there’s no defensible reason to assume they don’t)
Slide 20 ‘human’ should be pluralised
Slide 22 ‘important’ and ‘unimportant’ seem like loaded terms. I would replace with something more factual like (ideally a much less clunkily phrased) ‘causes large magnitude of suffering’, ‘causes comparatively small magnitude of suffering’
I don’t understand the phrase ‘aestivatable future light-cone’. What’s aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you’re getting at)
I would change ‘the species would survive’ on slide 25 to ‘would probably survive’, and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks). Similarly, I’d be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an ‘X-risk’, esp where X-risk is defined as here: https://nickbostrom.com/existential/risks.html
Where did the 10^35 number for future sentient lives come from for slide 26? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lives lost just for a century’s delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates a galaxy of Matrioshka brains could emulate 10^44 minds—it’s slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be 10^61 total, which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur’s claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite v/podcast)
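For anyone who wants to sanity-check that arithmetic, here is a rough back-of-the-envelope restatement in code; the inputs are just the figures quoted above (Arthur’s 10^44 minds per galaxy of Matrioshka brains, a factor of roughly 10^12 human lifetimes out to the end of the stelliferous era, and roughly 10^5 galaxies in Laniakea), not independent estimates:

```python
# Back-of-the-envelope restatement of the order-of-magnitude arithmetic above.
# All inputs are the rough figures quoted in the comment, not independent estimates.
minds_per_galaxy = 1e44       # Arthur's estimate for a galaxy of Matrioshka brains
lifetimes_per_mind = 1e12     # ~human lifetimes of subjective experience per mind
                              # between now and the end of the stelliferous era
galaxies_in_laniakea = 1e5    # approximate galaxy count in the Laniakea supercluster

total_life_equivalents = minds_per_galaxy * lifetimes_per_mind * galaxies_in_laniakea
print(f"{total_life_equivalents:.0e}")  # ~1e+61, vs the slide's 10^35 and Bostrom's 10^38 per century of delay
```

Shading a few orders of magnitude off for inefficiencies, as above, still leaves something vastly larger than 10^35.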
Re slide 29, I think EA has long stopped being ‘mostly moral philosophers & computer scientists’ if it ever strictly was, although they’re obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide, it’s not clear whether it’s a boast of a great status quo or a call to arms of a need for change
I would say EA needs more money and talent—there are still tonnes of underfunded projects!
You write, “Agree with others that overusing the word ‘utilitarianism’ seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).”
One thing I am sure of about effective altruism is that it endorses helping the greater number, all other things being equal (where, for simplicity’s sake, I am here only concerned with the quality of pain being equal). So, for example, if $10 can be used either to save persons A and B each from some pain or to save C from a qualitatively identical pain, EA would say that it is morally better to save the two over the one.
Now, this in itself does not mean that effective altruism believes that it makes sense to
1. sum together certain people’s pain and to compare said sum to the sum of other people’s pain in such a way as to be able to say that one sum of pain is in some sense greater/equal to/lesser than the other, and
2. say that the morally best action is the one that results in the least sum of pain and the greatest sum of pleasure (which is more-or-less utilitarianism)
(Note that 2. assumes the intelligibility of 1.; see below)
The reason is that there are also non-aggregative ways to justify why it is better to save the greater number, at least when all other things are equal. For a survey of such ways, see “Saving Lives, Moral Theory, and the Claims of Individuals” (Otsuka, 2006). However, I’m not aware that effective altruism justifies why it’s better to save the greater number, all else equal, via these non-aggregative ways. Likely, it is purposely silent on this issue. Ben Todd (in private correspondence) informed me that “effective altruism starts from the position that it’s better to help the greater number, all else equal. Justifying that premise in the first place is in the realm of moral philosophy.” If that’s indeed the case, we might say that all effective altruism says is that the morally better course of action is the one that helps more people, everything else being equal (e.g. when the suffering to each person involved in the choice situation is qualitatively the same), and (presumably) also sometimes even when everything isn’t equal (e.g. when the suffering to each person in the bigger group might be somewhat less painful than the suffering to each person in the smaller group).
Insofar as effective altruism isn’t in the business of justification, perhaps moral theories shouldn’t be mentioned at all in a presentation about effective altruism. But inevitably, people considering joining the movement are going to ask why it is better to save the greater number, all else equal (e.g. A and B instead of C), or even sometimes when all else isn’t equal (e.g. one million people each from a relatively minor pain instead of one other person from a relatively greater pain). And I think effective altruists ask themselves that question too. The OP might have asked it and thought utilitarianism offers the natural justification: it is better to save A and B instead of C (and the million instead of the one) because doing so results in the least sum of pain. So, utilitarianism clearly offers a justification (though one might question whether it is an adequate justification). On the other hand, it is not clear to me at all how other moral theories propose to justify saving the greater number in these two kinds of choice situations. So it is not surprising that the OP has associated utilitarianism with effective altruism. I am sympathetic.
A bit more on utilitarianism:
Roughly speaking, according to utilitarianism (or the principle of utility), among all the actions we can undertake at any given moment, the right action (ie the action we ought to take) is the one that results in the least sum of pain and the greatest sum of pleasure.
To figure out which action is the right action among a range of possible actions, we are to, for each possible action, add up all its resulting pleasures and pains. We are then to compare the resulting state of affairs corresponding to each action to see which resulting state of affairs contains the least sum of pain and greatest sum of pleasure. For example, suppose you can either save one million people each from a relatively minor pain or one other person from a relatively greater pain, but not both. Then you are to add up all the minor pains that would result from saving the single person, and then add up all the major pains (in this case, just 1) that would result from saving the million people, and then compare the two states of affairs to see which contains the least sum of pain.
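As a concrete (and purely illustrative) sketch of that comparison, here is roughly what the aggregation step amounts to; the numeric pain magnitudes below are arbitrary placeholders chosen just so the sums can be compared, not claims about how pain should actually be measured:

```python
# Illustrative sketch of the aggregation step described above.
# The pain magnitudes are arbitrary placeholders, not claims about how to measure pain.
minor_pain = 1        # disvalue of one person's relatively minor pain
major_pain = 1000     # disvalue of the one other person's relatively greater pain

# Pain left in the world if we save the one person (the million still suffer their minor pains):
pain_if_we_save_the_one = 1_000_000 * minor_pain       # 1,000,000

# Pain left in the world if we save the million (the one person still suffers the major pain):
pain_if_we_save_the_million = 1 * major_pain           # 1,000

# Utilitarianism, as characterised above, prescribes whichever action leaves the smaller sum:
if pain_if_we_save_the_million < pain_if_we_save_the_one:
    print("save the million")
else:
    print("save the one")
```

On these placeholder numbers the million win; the point is only that the verdict turns on summing and comparing distinct people’s pains at all.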
From this we can clearly see that utilitarianism assumes that it makes sense to aggregate distinct people’s pains and to compare these sums in such a way as to be able to say, for example, that the sum of pain involved in a million people’s minor pains is greater (in some sense) than one other person’s major pain. Of course, many philosophers have seriously questioned the intelligibility of that.