I felt that I absorbed something helpful from this conversation that I hope will make me better at introducing EA ideas. Is there a list of other examples of especially effective EA communication that would-be evangelists could learn from? I’m especially interested in conversations in which someone experienced with EA ideas discusses them with someone newer, as I feel that this stage can be especially tricky and important.
For example, here are two other conversations that come to mind from which I felt I absorbed something helpful about introducing EA ideas:
The recent 80k interview with Bob Wright, in particular from 2:47-12:52. Specifically, I was impressed with how Rob managed to give a fairly broad and well-balanced sketch of EA without getting sidetracked by Wright’s somewhat off-center questions. Overall this was one of my least favorite 80k episodes, but I think these 10 minutes are worth listening to for those interested in communicating about EA.
Ben Todd and Arden Koehler discussing the core idea of EA. This one is a little more meta, in the sense that Ben and Arden are both experienced with EA, but many parts of it still seem relevant to introducing EA ideas.
If a list like this doesn’t exist, I want to make it exist—open to suggestions on the best way to do that. (E.g. should I post this as a question or top-level post?)
See my response to AlexHT for some of my overall thoughts. A couple of other things that might be worth quickly sketching:
The real meat of the book, from my perspective, was its two contentions: (1) that longtermist ideas, and particularly the idea that the future is of overwhelming importance, may in the future be used to justify atrocities, especially if these ideas become more widely accepted; and (2) that those concerned about existential risk should advocate decreasing current levels of technology, perhaps to pre-industrial levels. I would have preferred the book to focus more on arguing for these two contentions.
Questions for Phil (or others who broadly agree):
On (1) from above, what credence do you place on 1 million or more people being killed sometime in the next century in a genocidal act whose public or private justifications were substantially based on EA-originating longtermist ideas?
To the extent you think such an event is unlikely to occur, is that mostly because you think that EA-originating longtermists won’t advocate for it, or mostly because you think that they’ll fail to act on it or persuade others?
On (2) from above, am I interpreting Phil correctly as arguing in Chapter 8 for a return to pre-industrial levels of technology? (Confidence that I’m interpreting Phil correctly here: Low.)
If Phil does want us to return to a pre-industrial state, what is his credence that humanity will eventually make this choice? What about in the next century?
P.S. If you’re feeling dissuaded from checking out Phil’s arguments because they’re labeled as a ‘book’, and books are long, don’t be: it’s a bit long for an article, but certainly no longer than many SSC posts, for example. That said, I’m also not endorsing the book’s quality.