I’d guess the story might be a) ‘XR primacy’ (roughly, that x-risk reduction has far bigger bang for one’s buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements are likely good, others are likely bad, so the value of generally ‘buying the index’ of technological development (as I take Progress Studies to be keen on) is uncertain.
‘XR primacy’
Other comments have already illustrated the main points here, sparing readers from another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is that you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction: you can cover much more ground in expectation if you make sure you’re not headed into a crash first.
This typically (but not necessarily, cf.) implies longtermism. ‘Global catastrophic risk’, as a longtermist term of art, plausibly excludes the vast majority of things common sense would call ‘global catastrophes’. E.g.:
[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction. (Open Phil)
My impression is that a ‘century more poverty’ probably isn’t a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn’t globally destabilising to humanity or human civilisation. Even more so if the question is one of a somewhat-greater versus somewhat-lower rate of its elimination.
This makes its continued existence no less an outrage to the human condition. But, weighed on the scales against threats to humankind’s entire future, it becomes a lower priority. Insofar as these things are traded off (which seems implicit in any prioritisation, given both compete for resources, whether or not there’s any direct cross-purposes in activity), the currency of XR reduction has much greater value.
Per discussion, there are a variety of ways the story sketched above could be wrong:
Longtermist consequentialism (the typical, if not uniquely necessary, motivation for the above) is false, so our exchange rate for common-sense global catastrophes (inter alia) versus XR should be higher.
XR is either very low or intractable, so XR reduction isn’t a good buy even on the exchange rate XR views endorse.
Perhaps the promise of the future could be lost not with a bang but a whimper. Perhaps prolonged periods of economic or technological stagnation should be substantial subjects of XR concern in their own right, so PS-land and XR-land converge on PS-y aspirations.
I don’t see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and the envelope of mitigation to be substantial, non-Pascalian values. Although costly activity that buys an absolute risk reduction of 1/trillions looks dubious to common sense, one of 1/thousands or more (e.g.) is commonplace (and commonsensical) when the stakes are high enough.
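To put rough numbers on that contrast, here is a minimal expected-value sketch; the stakes figure and both risk reductions are placeholder assumptions for illustration, not estimates anyone endorses:

```python
# Expected value of an absolute cut to extinction risk scales linearly
# with the stakes. All figures are illustrative assumptions.
stakes = 1e16           # assumed value of the long-run future (e.g. future lives)
pascalian_cut = 1e-12   # a '1/trillions' absolute risk reduction
modest_cut = 1e-3       # a '1/thousands' absolute risk reduction

print(f"EV of a 1/trillions cut: {stakes * pascalian_cut:,.0f}")  # 10,000
print(f"EV of a 1/thousands cut: {stakes * modest_cut:,.0f}")     # 10,000,000,000,000
```

At these (assumed) stakes, the 1/thousands case clears common-sense thresholds comfortably; it is only the 1/trillions case that trips Pascalian alarm bells.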
It’s not clear how much of a strike against a view it is that Pascalian counter-examples can be constructed from its own resources, where the view wouldn’t endorse them but lacks a crisp story of decision-theoretic arcana for why not. Facially, PS seems susceptible to the same (e.g. a PS-er’s work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).
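That mirror-image can be checked on the back of an envelope. A minimal sketch, where current world GDP, the baseline growth rate, and the horizon are all assumed figures:

```python
# Compounding a 0.0000001 percentage-point (1e-9) bump to the annual
# growth rate over centuries. Starting GDP, growth rate, and horizon
# are assumptions chosen for the sketch.
world_gdp = 1e14          # ~$100 trillion
baseline_growth = 0.02    # assumed 2% annual growth
delta = 1e-9              # the tiny marginal increase in the growth rate
years = 300

gdp_base = world_gdp * (1 + baseline_growth) ** years
gdp_bumped = world_gdp * (1 + baseline_growth + delta) ** years
print(f"Extra output in year {years}: ${gdp_bumped - gdp_base:,.0f}")
# -> roughly $11 billion of extra annual output by year 300
```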
Buying the technological progress index?
Granting the story sketched above, there’s no straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems a lot to temper the fairly unalloyed enthusiasm for technological progress I take to be the typical attitude in PS-land.
There’s obviously the appeal to the above sense of uncertainty: if at least significant bits of the technological progress portfolio credibly have very bad dividends for XR, you probably hope humanity is pretty selective and cautious in its corporate investments. It’d also be generally surprising if what is best for XR were also best for ‘progress’ (cf.)
The recent track record doesn’t seem greatly reassuring. The dual-use worries around nuclear technology remain profound 70+ years after its initial development, and ‘derisking’ these downsides remains remote. It’s hard to assess either the true ex ante probability of a strategic nuclear exchange during the cold war or exactly how disastrous it would have been, but pricing in reasonable estimates of both probably takes a large chunk out of the generally sunny story of progress we observe ex post over the last century.
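As a toy illustration of that ‘pricing in’ (the probability and outcome figures below are assumptions picked for the sketch, not historical estimates):

```python
# Ex-ante discounting of the century's observed gains by an assumed
# chance of catastrophe. All numbers are placeholder assumptions.
p_exchange = 0.2       # assumed ex-ante chance of a strategic nuclear exchange
observed_gain = 1.0    # normalise the realised (ex post) gains of the century
disaster_gain = -2.0   # assumed net outcome had the exchange occurred

ex_ante = (1 - p_exchange) * observed_gain + p_exchange * disaster_gain
print(f"Ex-ante expected gain: {ex_ante:.2f} vs. {observed_gain:.2f} observed")
# -> 0.40 vs. 1.00: a large chunk out of the ex post sunny story
```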
Insofar as folks consider disasters arising from emerging technologies (like AI) to represent the bulk of XR, this supplies reason for concern about their rapid development in particular, and about exuberant technological development, which may generate further dangers, in general.
Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks as less so, both would actually be similarly un/enthusiastic about each particular case). I’d guess more of it is substantive disagreement about the balance of promise and danger posed by given technologies (and the prospects/best means to mitigate the latter), which then feeds into more or less ‘generalized techno-optimism’.
But I’d guess the majority of the action is around the ‘modal XR account’ of XR being a great moral priority, which can be significantly reduced, and which is substantially composed of risks from emerging technology. ‘Technocircumspection’ seems a fairly sound corollary of this set of controversial conjuncts.