Help me find the crux between EA/​XR and Progress Studies

I’m trying to get to the crux of the differences between the progress studies (PS) and the EA /​ existential risk (XR) communities. I’d love input from you all on my questions below.

The road trip metaphor

Let me set up a metaphor to frame the issue:

Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/​XR agree that the trip is good, and that as long as we don’t crash, faster would be better. But:

  • XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.

  • PS thinks we’re already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that’s secondary.

(See also @Max_Daniel’s recent post)

My questions

Here are some things I don’t really understand about the XR position. (Granted, I haven’t yet read the XR literature extensively, but I have read a number of the foundational papers.)

(Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux.)

1. How does XR weigh costs and benefits?

Is there any cost too high to pay for any level of XR reduction? Are XR folks willing to significantly increase global catastrophic risk, or GCR (one notch down from XR in Bostrom’s hierarchy), in order to decrease XR? I get the impression that they are. They seem to talk about any catastrophe short of full human extinction as, well, not that big a deal.

For instance, suppose that by accelerating progress we could end poverty (by whatever standard) a century earlier than otherwise. In that case, failing to do so should itself be considered a global catastrophe, or close to one. If you’re willing to accept a GCR in order to slightly reduce XR, then OK; but it feels to me that you’ve fallen for a Pascal’s Mugging.
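To make my worry concrete, here is a toy expected-value sketch. Every number in it is made up purely for illustration (the value of the future, the cost of a century of poverty, and the size of the risk reduction are all hypothetical, not anyone’s actual estimates); the point is just that once the long-term future is valued astronomically, the XR term swamps any near-term cost:

```python
# Toy expected-value comparison. All numbers are hypothetical,
# chosen only to illustrate the structure of the argument.

future_value = 1e18        # assumed value of humanity's long-term future (arbitrary units)
century_of_poverty = 1e9   # assumed cost of ending poverty a century later (same units)
xr_reduction = 1e-6        # assumed tiny cut in extinction probability from slowing down

ev_slow_down = xr_reduction * future_value  # expected value gained by slowing down
ev_accelerate = century_of_poverty          # value gained by ending poverty a century sooner

print(f"EV of slowing down: {ev_slow_down:.1e}")   # 1.0e+12
print(f"EV of accelerating: {ev_accelerate:.1e}")  # 1.0e+09

# Under these assumptions the XR term wins by three orders of magnitude,
# and it keeps winning for almost any near-term numbers you plug in.
# That "tiny probability times astronomical payoff always wins" structure
# is what looks like a Pascal's Mugging to me.
```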

Eliezer has specifically said that he doesn’t accept Pascal’s Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. The only counterarguments I’ve seen conclude “so AI safety (or other specific x-risk) is still a worthy cause”—which I’m fine with. I don’t see how you get to “so we shouldn’t try to speed up technological progress.”

2. Does XR consider tech progress default-good or default-bad?

My take is that tech progress is default-good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out safety issues; this has already been done once or twice in gene safety (e.g., the Asilomar moratorium on recombinant DNA research).

When I talk to XR folks, I sometimes get the impression that they want to flip this around and consider all tech progress bad unless we can make an XR-based case that it should go forward. That takes me back to question 1.

3. What would moral/social progress actually look like?

XR folks often suggest that it’s more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, and so on. I actually somewhat agree with that, but I’m not at all sure that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere and I just haven’t found it yet?

Without understanding this, the position comes across as putting tech progress on indefinite hold until we somehow become better people and have thereby reduced XR sufficiently; and it’s unclear how we could ever reduce it enough, given question 1.

4. What does XR think about the large numbers of people who don’t appreciate progress, or actively oppose it?

Returning to the road trip metaphor: while PS and EA/​XR debate the ideal balance of resources towards steering vs. acceleration, and which is more neglected, there are other passengers in the car. Many are yelling to just slow down, and some are even saying to turn around and go backwards. A few, full of revolutionary zeal, are trying to jump up and seize the steering wheel in order to accomplish this, while others are trying to sabotage the car itself. Before PS and EA/​XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.

This seems like a problem to me, especially in the context of question 3: I don’t know how we make social progress when this is what we have to work with. So a big part of progress studies is simply trying to educate more people that the car is valuable and that forward is actually where we want to go. (But I don’t think anyone in EA/XR sees it this way or is sympathetic to this line of reasoning, if only because I’ve never heard them discuss this faction of humanity at all or recognize it as a problem.)


Thank you all for your input here! I hope that understanding these issues better will help me finally answer @Benjamin_Todd’s question, which I am long overdue in addressing.