See my response to AlexHT for some of my overall thoughts. A couple other things that might be worth quickly sketching:
From my perspective, the real meat of the book was its two contentions: (1) that longtermist ideas, and particularly the idea that the future is of overwhelming importance, may in the future be used to justify atrocities, especially if these ideas become more widely accepted, and (2) that those concerned about existential risk should be advocating that we decrease current levels of technology, perhaps to pre-industrial levels. I would have preferred that the book focus more on arguing for these contentions.
Questions for Phil (or others who broadly agree):
On (1) from above, what credence do you place on 1 million or more people being killed sometime in the next century in a genocidal act whose public or private justifications were substantially based on EA-originating longtermist ideas?
To the extent you think such an event is unlikely to occur, is that mostly because you think that EA-originating longtermists won’t advocate for it, or mostly because you think that they’ll fail to act on it or persuade others?
On (2) from above, am I interpreting Phil correctly as arguing in Chapter 8 for a return to pre-industrial levels of technology? (Confidence that I’m interpreting Phil correctly here: Low.)
If Phil does want us to return to a pre-industrial state, what is his credence that humanity will eventually make this choice? What about in the next century?
P.S. - If you're dissuaded from checking out Phil's arguments because they're labeled as a 'book', and books are long, don't be: it's a bit long for an article, but certainly no longer than many SSC posts, for example. That said, I'm also not endorsing the book's quality.