Tyle_Stelzig
I felt that I absorbed something helpful from this conversation that I hope will make me better at introducing EA ideas. Is there a list of other examples of especially effective EA communication that would-be evangelists could learn from? I’m especially interested in conversations in which someone experienced with EA ideas discusses them with someone newer, as I feel that this stage can be especially tricky and important.
For example, here are two other conversations that come to mind that I felt I absorbed something helpful from with respect to introducing EA ideas:
The recent 80k interview with Bob Wright, in particular from 2:47-12:52. Specifically, I was impressed with how Rob managed to give a fairly broad and well-balanced sketch of EA without getting sidetracked by Wright’s somewhat off-center questions. Overall this was not one of my favorite 80k episodes, but I think these ten minutes are worth listening to for anyone interested in communicating about EA.
Ben Todd and Arden Koehler discussing the core idea of EA. This one is a little more meta, in the sense that Ben and Arden are both experienced with EA, but many parts of it still seem relevant to introducing the ideas to newcomers.
If a list like this doesn’t exist, I want to make it exist—open to suggestions on the best way to do that. (E.g. should I post this as a question or top-level post?)
Thanks for this recommendation! It caused me to listen to this episode, which I otherwise probably wouldn’t have done.
I agree with Ben that the outsize impact of this episode may largely be due to the amenability of Sam’s audience to EA ideas, but I also thought this was a fantastic conversation which does a great job introducing EA in a positive, balanced, and accurate way. I do feel I absorbed something from listening to Will here that will hopefully make me better at introducing EA ideas. I may also start recommending this episode as an introduction to EA for some people, though I don’t think it will be my main go-to for most people.
I’m also glad to have listened to this conversation merely out of appreciation for its role in bringing so many people to the movement. :)
See my response to AlexHT for some of my overall thoughts. A couple other things that might be worth quickly sketching:
The real meat of the book, from my perspective, was its two contentions: (1) that longtermist ideas, and particularly the idea that the future is of overwhelming importance, may in the future be used to justify atrocities, especially if these ideas become more widely accepted, and (2) that those concerned about existential risk should be advocating that we decrease current levels of technology, perhaps to pre-industrial levels. I would have preferred if the book had focused more on arguing for these contentions.
Questions for Phil (or others who broadly agree):
On (1) from above, what credence do you place on 1 million or more people being killed sometime in the next century in a genocidal act whose public or private justifications were substantially based on EA-originating longtermist ideas?
To the extent you think such an event is unlikely to occur, is that mostly because you think that EA-originating longtermists won’t advocate for it, or mostly because you think that they’ll fail to act on it or persuade others?
On (2) from above, am I interpreting Phil correctly as arguing in Chapter 8 for a return to pre-industrial levels of technology? (Confidence that I’m interpreting Phil correctly here: Low.)
If Phil does want us to return to a pre-industrial state, what is his credence that humanity will eventually make this choice? What about in the next century?
P.S. If you’re feeling dissuaded from checking out Phil’s arguments because they are labeled as a ‘book’, and books are long, don’t be: it’s a bit long for an article, but certainly no longer than many SSC posts, for example. That said, I’m also not endorsing the book’s quality.
I upvoted Phil’s post, despite agreeing with almost all of AlexHT’s response to EdoArad above. This is because I want to encourage good faith critiques, even those which I judge to contain serious flaws. And while there were elements of Phil’s book that read to me more like attempts at mood affiliation than serious engagement with his interlocutor’s views (e.g. ‘look at these weird things that Nick Bostrom said once!’), on the whole I felt that there was enough effort at engagement that I was glad Phil took the time to write up his concerns.
Two aspects of the book that I interpreted somewhat differently than Alex:
The genocide argument that Alex expressed confusion about: I thought Phil’s concern was not that longtermism would merely consider genocide while evaluating options, but that it seems plausible to Phil that longtermism (or a future iteration of it encountering different facts) could endorse genocide. That is, Phil is worried about genocide as an output of longtermism’s decision process, not as an input. My model of Phil is that if he were confident that longtermism would always reject genocide, then he wouldn’t be concerned merely that such possibilities are evaluated. Confidence: Low/moderate.
The section describing utilitarianism: I read this section as merely aiming to describe an aspect of longtermism and to highlight features that might be wrong or counter-intuitive, not to actually argue against the views described. This would explain Alex’s confusion about what was being argued for (nothing) and his feeling that intuitions were just being thrown at him (they were). I think Phil’s purpose here is to lay the groundwork for his later argument that these ideas could be dangerous. The only argument against utilitarianism that I noticed comes later: namely, that together with empirical beliefs about the possibility of a large future, it leads to conclusions that Phil rejects. Confidence: Low.
I agree with Alex that the book was not clear on these points (among others), and I attribute our different readings to that lack of clarity. I’d certainly be happy to hear Phil’s take.
I have a couple of other thoughts that I will add in a separate comment.
In the technical information-theoretic sense, ‘information’ counts how many bits are required to convey a message. And bits describe proportional changes in the number of possibilities, not absolute changes. The first bit of information reduces 100 possibilities to 50, the second reduces 50 possibilities to 25, etc. So the bit that takes you from 100 possibilities to 50 is the same amount of information as the bit that takes you from 2 possibilities to 1.
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you’re reducing the number of possibilities by a factor of 10.
To take your example: if you were using two digits in base four to represent per-sixteenths, then each digit contains 50% of the information (two bits each, reducing the space of possibilities by a factor of four). To take the example of per-thousandths: each of the three digits contains a third of the information (about 3.3 bits each, reducing the space of possibilities by a factor of 10).
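To make the proportional counting concrete, here is a minimal Python sketch that reproduces these numbers (the `bits` helper is my own name for illustration, not anything from the discussion above):

```python
import math

def bits(before, after):
    """Bits of information gained by narrowing `before` possibilities down to `after`."""
    return math.log2(before / after)

print(bits(100, 50))    # 1.0   -- 100 -> 50 possibilities: one bit
print(bits(2, 1))       # 1.0   -- 2 -> 1 possibilities: also one bit
print(bits(100, 10))    # ~3.32 -- a factor-of-10 reduction
print(bits(10, 1))      # ~3.32 -- same factor, same amount of information
print(bits(16, 4))      # 2.0   -- one base-4 digit: half of the 4 bits in a per-sixteenth
print(bits(1000, 100))  # ~3.32 -- one decimal digit: a third of the ~9.97 bits in a per-thousandth
```

The point is that equal proportional reductions contribute equal amounts of information, regardless of where in the sequence of digits they occur.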
But upvoted for clearly expressing your disagreement. :)
The difference between carbon offsetting and meat offsetting is that carbon offsetting doesn’t involve causing harms, while meat offsetting does.
Most people would consider it immoral to murder someone for reasons of personal convenience, even if you make up for it by donating to a ‘murder offset’, such as, let’s say, a police department. MacAskill is saying that ‘animal murder’ offsetting is like this, because you are causing harm to animals, then attempting to ‘make up for it’ by helping other animals. Climate offsets are different because the offset prevents the harm from occurring in the first place.
Indeed, murder offsets would be okay from a purely consequentialist perspective. But this is not the trolley problem, for the reason that Telofy explains very well in his second paragraph above. Namely, the harmful act that you are tempted to commit is not required in order to achieve the good outcome.
Regarding your first paragraph: most people would consider it unethical to murder someone for reasons of personal convenience, even if you donated to a ‘murder offset’ organization such as, I don’t know, let’s say police departments. MacAskill is saying that ‘animal murder’ offsets are unethical in this same way. Namely, you are committing an immoral act—killing an animal—then saving some other animals to ‘make up for it’. Climate offsets are different because the harm is never caused in this case.
Regarding your last paragraph: This is a nice example, but it will fail if your company might modulate the amount of food that it buys in the future based on how much gets eaten. For example, if they consistently have a bunch of leftover chicken, they might try to save some money by purchasing less chicken next time. If this is possible, then there is a reason not to eat the free chicken.
Give a man a fish, and it may rot in transport. Teach a man to fish, and he may have other, more practical skills already. Give a man cash, and he can buy whatever is most useful for himself and his family.
(The idea is to highlight key benefits of cash in a way that also maps plausibly onto the fishing example. I’m sure the wording and examples here could be improved; suggestions welcome!)