How does XR weigh costs and benefits?
I don’t think there’s a uniform “existential risk reduction movement” answer to what % of X-risk reduction people are willing to trade for large harms today, any more than there’s a uniform “effective altruism movement” answer to what the limits of altruism ought to be. For me personally, all the realistic tradeoffs I make day to day point toward reducing X-risk being of overwhelming importance, model uncertainty aside.
In practice, this means I try to spend my time thinking about longtermist work, with mixed success, aside from a) things that build my own skills or otherwise make me more effective, b) small cooperative actions that I think are unusually good by the lights of value systems fairly similar to my own, c) entertainment, and d) bullshit.
I think the probability of existential risk is high enough that I’m not, in practice, too worried about Pascal’s mugging issues. In the moments I doubt whether EA longtermism is what I ought to be working on, I’m much more worried about issues in the class of “randomly or systematically deluding myself about the probabilities of specific outcomes I’m worried about” than about issues akin to “my entire decision procedure is wrong because expected value calculations are dumb.”
Does XR consider tech progress default-good or default-bad?
Basically my view is similar to what JP said. From an XR perspective, the sign of faster tech progress is pretty hard to determine. There are some weak arguments in favor of it being systematically good (eg, that fast technological progress reduces natural risks over the long run, while not pushing systematically in either direction for man-made risks), but these are pretty weak if we think the base rate of natural risks is very low. Of course, I don’t just have an XR perspective, and from other perspectives (as JP mentions) the benefits of technological and economic progress are clearer.
As an aside, I think trying to shoehorn in “belief in progress” from an xrisk perspective is kinda dubious. Analogously, I think it’d be dumb to sell kids on the value of learning music because of purported transfer-learning benefits to mathematics standardized tests. There may or may not be some cognitive benefits to learning music, but that’s not why I got into music, music is hardly the most efficient way to learn mathematics, and it’d be dubious to get into music primarily to boost your math test scores.
What would moral/social progress actually look like?
I think a lot of the types of progress that some subset of EAs (including myself) are interested in are “specific technical or organizational solutions to risks that we think of as on the horizon.” High-level moral/social progress is something we don’t fully understand and are still trying to get clarity on.
Re “it comes across as if tech progress is on indefinite hold until we somehow become better people and thus have sufficiently reduced XR”: I think this assigns more impact/agency/credit/blame to either XR or EAs than is reasonable. The rest of the world is big, and many people want (often for selfish or narrowly altruistic reasons) to work indirectly on economic progress to better their own lives, or those of people close to them. Some people, like yourself, also want to increase economic progress more broadly, for more global reasons.
What does XR think about the large numbers of people who don’t appreciate progress, or actively oppose it?
In practice I don’t think much about them.
I think they’re probably wrong about this, but they’re probably wrong about many other things too (as am I, to be clear!), and I personally feel (possibly over-arrogantly) that I understand their position well enough to reject it, or at least that I’ve made enough attempts at communication that further attempts aren’t very useful.
Note that this is just a personal take; reasonable people can disagree about how valuable this type of engagement is, either in general or in this specific case, and I can also imagine (though I think it’s unlikely) that people much more charitable and/or perceptive and/or diplomatic than myself could gain a lot of value from those conversations.
Re the “highway of progress”/road trip metaphor:
I like the idea of a highway of progress.
A key distinction to me is whether you think of the highway as more of a road trip or more of a move (less metaphorically, whether the purpose is more to enjoy the journey or to go from Point A to Point B).
To extend (and hopefully not butcher) the metaphor, my stereotype of progress studies people is that they treat the highway of progress much like a road trip. That is, you have limited time, and it’s really important to strike a balance between a) keeping an appropriately fast/steady pace along the highway and b) enjoying yourself while you’re at it. Sure, it’s important to avoid unlikely catastrophes like driving off a cliff, but those are unusual, and the faster you go, the more sights you see. So the usual balance of considerations is something like consuming now (stopping and smelling the roses, in this metaphor) vs. doing more so we and our near descendants can enjoy more later (traveling faster to see more sights).
My own view** of the highway of progress is that we’re actually trying to go somewhere, perhaps with the intention of settling there forever. We’re leaving our unpleasant homeland to travel, along a dangerous road, to somewhere glorious and wonderful, perhaps a location we previously identified as potentially great (like New York), perhaps a nice suburb somewhere along the road.
(There are nicer and less nice versions of this metaphor. Perhaps we’re refugees from a war-torn country. Perhaps we’re new college graduates convinced that we can find a better life outside of our provincial hometowns.)
So to many longtermist EAs, the destination matters much more than the journey.
In this regard, I think the story/framing where “humanity is trying to take an exciting but dangerous journey to lands known and unknown” (ie, trying to reach utopia) makes sense as a story in which exquisite care should be taken.
Ultimately you have a lifetime to go from Point A to Point B, but you want to be very careful not to make rash, irrevocable moves that end your journey prematurely.
Existential risks can come from death (extinction risk) or from other things that prematurely end the journey:
- In the travel metaphor, you can get stuck in a suboptimal town and fool yourself into thinking it’s a great place to live.
- In our world, this could be dystopias or astronomical suffering, or just being trapped in bad values.
It’s notable that there have been many past (admittedly non-existential) failures from people attempting to reach utopia.
If you see the highway of progress as a road trip:
- Haste is really important (limited vacation time!).
- You’d be a dumbass to be so careful that you barely visit anywhere.
- The journey matters more than the destination.
- Not reaching the final destination is totally fine.
- Safety is important but not critical.
If you see the highway as a mode of one-way transit:
- It’s fine to take your time.
- You have a lifetime to get there.
- When you get there matters much less than getting there at all.
- Obviously you’d still want to get there eventually, but since the trade is something on the order of 1% of astronomical waste per 10 million years of delay, how fast you go is not too important.
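The “astronomical waste” tradeoff in the list above can be made concrete with a quick back-of-envelope calculation. The 1%-per-10-million-years figure is from the post; the century of delay and the 1-percentage-point risk reduction are illustrative assumptions of mine, not anyone’s actual estimates:

```python
# Back-of-envelope: cost of delay vs. benefit of risk reduction.
# Assumes value lost to delay accrues linearly, per the figure above.
waste_per_year = 0.01 / 10_000_000   # ~1% of total value per 10M years of delay
delay_years = 100                    # illustrative: a century of extra caution
cost_of_delay = waste_per_year * delay_years   # fraction of total value forgone

risk_reduction = 0.01                # illustrative: cut existential risk by 1 point

print(cost_of_delay)                   # ~1e-07
print(risk_reduction / cost_of_delay)  # risk reduction wins by ~100,000x
```

On these made-up but not crazy numbers, even a century of delay costs five orders of magnitude less than a single percentage point of risk reduction, which is the sense in which speed matters much less than arrival.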
I haven’t heard a satisfying explanation from Progress Studies folks of why the urgency of economic growth is so high, at least for people who a) think there are percentage-point probabilities of existential risk and b) agree with a zero intrinsic discount rate.
Possible explanations:
There aren’t percentage points of existential risk.
Alternatively, the net probability that dedicated effort from humanity can actually avert existential risks is <<1%.
As a special case of the above, the probability of all of humanity dying in the next ~1000 years approaches 1 (cf. Tyler Cowen, H/T Applied Divinity Studies).
This is an argument for working on medium-term economic progress over working on existential risk, not because the risk is too low but because it’s too (unfixably) high.
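The “probability approaches 1” claim is easiest to see as a compounding hazard rate over many centuries (the per-century risk numbers below are purely illustrative assumptions of mine, not estimates from the post or from Cowen):

```python
# If extinction risk is an independent probability p each century,
# survival over N centuries compounds as (1 - p) ** N.
def survival(p_per_century: float, centuries: int) -> float:
    return (1 - p_per_century) ** centuries

print(survival(0.1, 10))   # ~0.35: 10%/century still leaves decent 1,000-year odds
print(survival(0.5, 10))   # ~0.001: 50%/century makes survival nearly hopeless
```

The point of the sketch: for cumulative risk to approach 1 over ~1000 years, the per-century hazard rate must be very high and also not meaningfully reducible; with more moderate hazard rates, cumulative survival odds stay well away from zero.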
The zero intrinsic discount rate is morally or empirically mistaken.
Temporary stagnation either inevitably, or with high probability, leads to permanent stagnation (so it is itself an existential risk).
Some combination of values-pluralism and comparative-advantage arguments, such that it makes sense for some individuals to work on progress studies even if it is overall less overwhelmingly important than xrisk.
I find this very plausible, but my general anecdotal impression from you and others in this circle is that you’re usually making much stronger claims.
Something else
Clarifying which position individuals believe may be helpful here.
** To be clear, you don’t have to hold this view to be a longtermist or an EA, but I do think it is much more common among the modal longtermist EA than the modal Progress Studies fan.