OK, so maybe there are a few potential attitudes towards progress studies:
(1) It’s definitely good and we should put resources into it
(2) Eh, it’s fine but not really important, and I’m not interested in it
(3) It is actively harming the world by increasing x-risk, and we should stop it
I’ve been perceiving a lot of EA/XR folks to be in (3) but maybe you’re saying they’re more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I’m somewhere between (1) and (2)… I think there are valuable things to do here, and I’m glad people are doing them, but I can’t see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we’re just disagreeing on relative priority and neglectedness. (But I don’t think that’s all of it.)
Your three items cover good + top priority, good + not top priority, and bad + top priority, but not a fourth option: bad + not top priority.
I think people concerned with x-risk generally think that progress studies, as a program of intervention to expedite growth, will have less expected impact (good or bad) on the history of the world per unit of effort than work aimed directly at urgent threats. And if we condition on people thinking progress studies does more harm than good, mostly they’ll say it isn’t important enough to focus on arguing against at the current margin (as opposed to directly targeting those threats). Only a small portion of generalized economic expansion goes to the most harmful activities (the damage there comes from expediting dangerous technologies in AI and bioweapons, which we are getting better at handling over time, so delay would help) or to efforts to avert disaster, so there is much more leverage in focusing narrowly on the most important areas.
With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world’s population with bioweapons is not available in known technologies (although huge secret bioweapons programs like the old Soviet one may have developed dangerous things already), and if that capacity is delayed there is a chance it will be averted entirely, or will become much easier to defend against via AI, universal sequencing, and improvements in defenses and law enforcement. This is even more so for the sub-areas that most expand bioweapon risk. That said, any attempt to discourage dangerous bioweapon-enabling research must compete against other interventions (improved lab safety, treaty support, law enforcement, countermeasure platforms, etc.), and so would itself have to be narrowly targeted and leveraged.
With respect to artificial intelligence, views on the sign of speeding it up vary, depending on whether one thinks the risk of an AI transition is getting better or worse over time (better because of developments in areas like AI alignment and transparency research, field-building, etc.; worse because of societal or geopolitical changes). Generally, though, people concerned with AI risk think it is much more effective to fund efforts to find alignment solutions and improved policy responses (growing them from a very small base, so cost-effectiveness is relatively high) than to fund a diffuse and ineffective effort to slow the technology (especially in a competitive world where the technology would be developed elsewhere, perhaps with higher transition risk).
For most other areas of technology and economic activity (e.g. energy, agriculture, most areas of medicine), the x-risk/longtermist implications are comparatively small, suggesting a more neartermist evaluative lens (e.g. comparing against GiveWell-style benchmarks).
Long-lasting (centuries) stagnation is a risk worth taking seriously (the recent slowdown of the population growth that sustained superexponential growth through history points toward stagnation, absent something like AI to ease the labor bottleneck), but it seems a lot less likely than other x-risks. If you think AGI is likely this century, then growth returns to the superexponential track (but more explosively), eventually running into technological limits on exponential growth and settling into polynomial expansion through space. Absent AGI or catastrophic risk (though stagnation combined with advanced WMD would increase such risk), permanent stagnation also looks unlikely given the capacities of current technology, once population has time to grow and reach frontier productivity.
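As a rough formal gloss on the growth regimes mentioned above (my own stylized models, not anything from this discussion): output feeding back into the idea-generating population gives hyperbolic growth with a finite-time singularity; capping that feedback gives at most exponential growth; and once technology saturates, light-speed-limited expansion into space yields only polynomial (roughly cubic) growth in accessible resources.

```latex
% Stylized growth regimes (illustrative assumptions, not anyone's actual forecasts)
\[
\underbrace{\dot{x} = a\,x^{\,1+\epsilon},\ \epsilon > 0}_{\text{superexponential: diverges in finite time}}
\qquad
\underbrace{\dot{x} = a\,x \;\Rightarrow\; x(t) = x_0 e^{a t}}_{\text{exponential}}
\qquad
\underbrace{R(t) \propto (c\,t)^{3}}_{\text{polynomial: resources from spatial expansion}}
\]
```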
I think the best case for progress studies being top priority would involve a strong focus on the current generation compared to all future generations combined, on rich-country citizens vs. the global poor, and on technological progress over the next few decades rather than in 2121. But given my estimates of catastrophic risk and my sense of the available interventions, at the current margin I’d still think that reducing AI and bio risk does better for current people, per unit of effort, than the progress studies agenda.
I wouldn’t support arbitrarily huge sacrifices of the current generation to reduce tiny increments of x-risk, but at the current level of neglectedness and impact (for both current and future generations), averting AI and bio catastrophe looks more impactful even without extreme valuations of the future. As such risk-reduction efforts scale up, marginal returns would fall and growth-boosting interventions would become more competitive (with a big penalty for the couple of areas that disproportionately pose x-risk).
That said, understanding tech progress, returns to R&D, and similar issues also comes up in trying to model and influence the world in assorted ways (e.g. it’s important for understanding AI risk, or for building technological countermeasures to risks to long-term development). For such purposes I have done a fair amount of investigation that would fit within progress studies as an intellectual enterprise.
I also lend assistance to some neartermist EA research focused on growth, in areas that don’t disproportionately increase x-risk, and to the development of technologies that make it more likely things will go well.
I’ve been perceiving a lot of EA/XR folks to be in (3) but maybe you’re saying they’re more in (2)?
Yup.
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we’re just disagreeing on relative priority and neglectedness.
That’s what I would say.
I can’t see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
If you have opportunity A where you get a benefit of 200 per $ invested, and opportunity B where you get a benefit of 50 per $ invested, you want to invest in A as much as possible, until the opportunity dries up. Relative to civilizational-scale resources, individual opportunities dry up quickly (i.e. with millions, maybe billions of dollars), so at that scale you see lots of diversity. At EA scales, this is less true. (A toy sketch at the end of this comment illustrates the point.)
So I do agree that some XR folks (myself included) would, if given a pot of millions of dollars to distribute, allocate it all to XR; I don’t think the same people would do it for e.g. trillions of dollars. (I don’t know where in the middle it changes.)
I think Open Phil, at the billions of dollars range, does in fact invest in lots of opportunities, some of which are arguably about improving progress. (Though note that they are not “fully” XR-focused, see e.g. Worldview Diversification.)
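To make the marginal-allocation logic above concrete, here is a toy sketch in Python. The `allocate` function, the diminishing-returns curves, and the dollar figures are all illustrative assumptions of mine, not anyone’s actual cost-effectiveness estimates.

```python
# Toy model of funding allocation under diminishing marginal returns.
# All curves and numbers below are illustrative assumptions, not real estimates.

def allocate(budget, opportunities, step):
    """Greedily spend `budget` in increments of `step` on whichever opportunity
    currently offers the highest marginal benefit per dollar."""
    spent = {name: 0.0 for name in opportunities}
    remaining = budget
    while remaining >= step:
        # Fund whichever opportunity has the best marginal return at its current funding level.
        best = max(opportunities, key=lambda name: opportunities[name](spent[name]))
        spent[best] += step
        remaining -= step
    return spent

# Marginal benefit per dollar as a function of dollars already spent.
opportunities = {
    "A": lambda s: 200.0 / (1.0 + s / 100e6),  # starts at 200/$, falls below 50/$ past ~$300M
    "B": lambda s: 50.0,                       # roughly flat at 50/$
}

print(allocate(10e6, opportunities, step=1e6))   # EA-scale pot: everything goes to A
print(allocate(2e9, opportunities, step=10e6))   # civilization-scale pot: A dries up, B gets the rest
```

With a small pot, the greedy allocation puts everything into A; only once A’s marginal returns have fallen to B’s level does the portfolio diversify, which is the sense in which diversity shows up at civilizational scale but much less at EA scale.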
There’s a variant of attitude (1) which I think is worth pointing out:
(1b) Progress studies is good and we should put resources into it, because it is a good way to reduce x-risk on the margin.
Some arguments for (1b):
Progress studies helps us understand how tech progress is made, which is useful for predicting X-risk.
The more wealthy and stable we are as a civilization, the less likely we are to end up in arms-race type dynamics.
Some technologies help us deal with X-risk (e.g. mRNA for pandemic risks, or intelligence augmentation for all risks). This argument only works if PS accelerates the ‘good’ types of progress more than the ‘bad’ ones, which seems possible.