I’m not making a claim about how effective our efforts can be. I’m asking a more abstract, methodological question about how we weigh costs and benefits.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal’s Mugging.
If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.
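To spell out the arithmetic behind the first horn of that dilemma, here is a minimal sketch (the stake is the 1e15 figure above; the probability and cost numbers are purely illustrative assumptions, not anyone’s actual estimates):

```python
# Purely illustrative: the expected-value arithmetic behind the Pascal's Mugging worry.
FUTURE_LIVES_AT_STAKE = 1e15     # the "1e15 future lives" figure from above

def expected_future_lives_saved(xrisk_reduction):
    """Expected future lives saved by a given reduction in extinction probability."""
    return FUTURE_LIVES_AT_STAKE * xrisk_reduction

huge_present_cost = 1e9          # a catastrophically large cost, measured in present lives
tiny_xrisk_reduction = 1e-5      # a 0.001% reduction in extinction probability

benefit = expected_future_lives_saved(tiny_xrisk_reduction)   # 1e10 expected future lives
print(benefit > huge_present_cost)   # True: under naive expected-value reasoning, the tiny reduction "wins"
```

The mugging worry is about someone who, in practice, lets the stake dominate this comparison for any cost and any expected reduction, however extreme the ratio.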
And so then I just want to know, OK, what’s the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:
We should invest resources in AI safety? OK, I’m good with that. (I’m a little unclear on what we can actually do there that will help at this early stage, but that’s because I haven’t studied it in depth, and at this point I’m at least willing to believe that there are valuable programs there. So, thumbs up.)
We should raise our level of biosafety at labs around the world? Yes, absolutely. I’m in. Let’s do it.
We should accelerate moral/social progress? Sure, we absolutely need that—how would we actually do it? See question 3 above.
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it’s unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.
But maybe that’s not actually the proposal from any serious EA/XR folks? I am still unclear on this.
If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal’s Mugging.
Sure. I think most longtermists wouldn’t endorse this (though a small minority probably would).
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.
I don’t think this is negative; I think there are better opportunities to affect the future (along the lines of Ben’s comment).
I think this is mostly true of other EA / XR folks as well (or at least, if they think it is negative, they aren’t confident enough in it to actually say “please stop progress in general”). As I mentioned above, people (including me) might say it is negative in specific areas, such as AGI development, but not more broadly.
And it’s unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.
I agree with that (and I think most others would too).
OK, so maybe there are a few potential attitudes towards progress studies:
1. It’s definitely good and we should put resources into it.
2. Eh, it’s fine but not really important and I’m not interested in it.
3. It is actively harming the world by increasing x-risk, and we should stop it.
I’ve been perceiving a lot of EA/XR folks to be in (3) but maybe you’re saying they’re more in (2)?
Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I’m somewhere between (1) and (2)… I think there are valuable things to do here, and I’m glad people are doing them, but I can’t see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we’re just disagreeing on relative priority and neglectedness. (But I don’t think that’s all of it.)
Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.
I think people concerned with x-risk generally expect that progress studies, as a program of intervention to expedite growth, will have less expected impact (good or bad) on the history of the world per unit of effort than work aimed directly at x-risk. And conditional on thinking that progress studies does more harm than good, most would say it isn’t important enough to argue against at the current margin (as opposed to directly targeting urgent threats to the world). Only a small portion of generalized economic expansion goes to the most harmful activities (where the damage comes from expediting dangerous technologies in AI and bioweapons that we are still improving our ability to handle, so that delay would help) or to efforts to avert disaster, so there is much more leverage in focusing narrowly on the most important areas.
With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world’s population with bioweapons is not available in known technologies (although huge secret bioweapons programs, like the old Soviet one, may have developed dangerous things already), and if that capacity is delayed there is a chance it will be averted entirely, or will be much easier to defend against via AI, universal sequencing, and improvements in defenses and law enforcement. This is even more so for the sub-areas that most expand bioweapon risk. That said, any attempt to discourage dangerous bioweapon-enabling research must compete against other interventions (improved lab safety, treaty support, law enforcement, countermeasure platforms, etc.), and so would itself have to be narrowly targeted and leveraged.
With respect to artificial intelligence, views on the sign vary depending on whether one thinks the risk of an AI transition is getting better or worse over time (better because of developments in areas like AI alignment and transparency research, field-building, etc.; or worse because of societal or geopolitical changes). Generally, though, people concerned with AI risk think it is much more effective to fund efforts to find alignment solutions and improved policy responses (growing them from a very small base, so cost-effectiveness is relatively high) than to mount a diffuse and ineffective effort to slow the technology (especially in a competitive world where the technology would be developed elsewhere, perhaps with higher transition risk).
For most other areas of technology and economic activity (e.g. energy, agriculture, most areas of medicine), the x-risk/longtermist implications are comparatively small, suggesting a more neartermist evaluative lens (e.g. comparing more against things like GiveWell).
Long-lasting (centuries) stagnation is a risk worth taking seriously (the slowdown of the population growth that sustained superexponential growth through history until recently points to stagnation, absent something like AI to ease the labor bottleneck), but it seems a lot less likely than other x-risks. If you think AGI is likely this century, then on that view we return to the superexponential track (but more explosively) and approach the technological limits to exponential growth, followed by polynomial expansion in space. Absent AGI or catastrophic risk (although stagnation with advanced WMD would increase such risk), permanent stagnation also looks unlikely based on the capacities of current technology, given time for population to grow and reach frontier productivity.
I think the best case for progress studies being the top priority would rest on a strong focus on the current generation compared to all future generations combined, on rich-country citizens vs. the global poor, and on technological progress over the next few decades rather than in 2121. But given my estimates of catastrophic risk and my sense of the interventions, at the current margin I’d still think that reducing AI and bio risk does better for current people, per unit of effort, than the progress studies agenda.
I wouldn’t support arbitrarily huge sacrifices by the current generation to reduce tiny increments of x-risk, but at the current level of neglectedness and impact (for both current and future generations), averting AI and bio catastrophe looks more impactful without requiring extreme valuations. As such risk-reduction efforts scale up, marginal returns would fall and growth-boosting interventions would become more competitive (with a big penalty for the couple of areas that disproportionately pose x-risk).
That said, understanding tech progress, returns to R&D, and similar issues also comes up in trying to model and influence the world in assorted ways (e.g. it’s important for understanding AI risk, or for building technological countermeasures to risks to long-term development). I have done a fair amount of investigation that would fit into progress studies as an intellectual enterprise for such purposes.
I also lend my assistance to some neartermist EA research focused on growth, in areas that don’t very disproportionately increase x-risk, and to development of technologies that make it more likely things will go better.
I’ve been perceiving a lot of EA/XR folks to be in (3) but maybe you’re saying they’re more in (2)?
Yup.
Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we’re just disagreeing on relative priority and neglectedness.
That’s what I would say.
I can’t see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).
If you have opportunity A, where you get a benefit of 200 per $ invested, and opportunity B, where you get a benefit of 50 per $ invested, you want to invest in A as much as possible, until the opportunity dries up. At a civilizational scale, individual opportunities dry up quickly (i.e. after millions, maybe billions of dollars), so you see lots of diversity. At EA scales, this is less true.
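As a toy version of that allocation logic (the 200-per-$ and 50-per-$ figures are from the example above; the capacities and linearly diminishing returns are made-up assumptions), a greedy allocator concentrates a small budget entirely on A and only diversifies once A’s marginal returns have fallen:

```python
# Toy sketch of "fund the best marginal opportunity until it dries up".
# Base returns (200 and 50 per $) come from the example above; the capacities
# and the linear decay of returns are invented for illustration.
OPPORTUNITIES = {"A": (200.0, 1e8), "B": (50.0, 1e10)}  # name: (base return per $, capacity in $)

def marginal_return(name, invested):
    base, capacity = OPPORTUNITIES[name]
    # Returns per dollar fall linearly to zero as the opportunity absorbs funding.
    return base * max(0.0, 1.0 - invested / capacity)

def allocate(budget, step=1_000_000):
    invested = {name: 0.0 for name in OPPORTUNITIES}
    for _ in range(int(budget // step)):
        best = max(invested, key=lambda name: marginal_return(name, invested[name]))
        invested[best] += step
    return invested

print(allocate(10_000_000))      # EA-scale budget: everything goes to A
print(allocate(1_000_000_000))   # larger budget: A's returns fall to B's level, and B absorbs most of the rest
```

The same logic is why a small funder can sensibly put everything into its best opportunity while a civilization-sized budget ends up diversified.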
So I do agree that some XR folks (myself included) would, if given a pot of millions of dollars to distribute, allocate it all to XR; I don’t think the same people would do it for e.g. trillions of dollars. (I don’t know where in the middle it changes.)
I think Open Phil, at the billions of dollars range, does in fact invest in lots of opportunities, some of which are arguably about improving progress. (Though note that they are not “fully” XR-focused, see e.g. Worldview Diversification.)
There’s a variant of attitude (1) which I think is worth pointing out:
1b) Progress studies is good and we should put resources into it, because it is a good way to reduce X-risk on the margin.
Some arguments for (1b):
• Progress studies helps us understand how tech progress is made, which is useful for predicting X-risk.
• The more wealthy and stable we are as a civilization, the less likely we are to end up in arms-race-type dynamics.
• Some technologies help us deal with X-risk (e.g. mRNA for pandemic risks, or intelligence augmentation for all risks). This argument only works if PS accelerates the ‘good’ types of progress more than the ‘bad’ ones, which seems possible.
Cool to see this thread! Just a very quick comment on this:
But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.
I don’t think anyone is proposing this. The debate I’m interested in is about which priorities are most pressing at the margin (i.e. which create the most value per unit of resources).
The main claim isn’t that speeding up tech progress is bad,* just that it’s not the top priority at the margin vs. reducing x-risk or speeding up moral progress.**
One big reason for this is that lots of institutions are already very focused on increasing economic productivity / discovering new tech (e.g. ~2% of GDP is spent on R&D), whereas almost no one is focused on reducing x-risk.
If the amount of resources going into reducing x-risk grows, it will drop in relative effectiveness.
In Toby’s book, he roughly suggests that spending 0.1% of GDP on reducing x-risk is a reasonable target to aim for (about what is spent on ice cream). But that would be ~1000x more resources than today (see the rough arithmetic sketched after the footnotes below).
*Though I also think speeding up tech progress is more likely to be bad than reducing x-risk is, my best guess is that it’s net good.
**This assumes resources can be equally well spent on each. If someone has amazing fit with progress studies, that could make them 10-100x more effective in that area, which could outweigh the average difference in pressingness.
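To spell out the ~1000x arithmetic referenced above, with round numbers (world GDP is an assumed ~$100T here, and current x-risk spending is simply backed out of the claimed ratio rather than sourced):

```python
# Rough, illustrative arithmetic behind "0.1% of GDP" vs "~1000x more resources than today".
world_gdp = 100e12                            # assumed ~$100 trillion/year, a round number
target_spend = world_gdp * 0.001              # suggested 0.1% of GDP ≈ $100 billion/year
implied_current_spend = target_spend / 1000   # a ~1000x gap implies ≈ $100 million/year today
print(f"target ≈ ${target_spend / 1e9:.0f}B/yr, implied current ≈ ${implied_current_spend / 1e6:.0f}M/yr")
```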
I’m a little unclear on what we can actually do there that will help at this early stage
I’d suggest that this is a failure of imagination (sorry, I’m really not trying to criticise you, but I can’t find another phrase that captures my meaning!).
Like let’s just take it for granted that we aren’t going to be able to make any real research progress until we’re much closer to AGI. It still seems like there are several useful things we could be doing:
• We could be helping potential researchers to understand why AI safety might be an issue so that when the time comes they aren’t like “That’s stupid, why would you care about that!”. Note that views tend to change generationally, so you need to start here early.
• We could be supporting the careers of policy people (such as by providing scholarships), so that they are more likely to be in positions of influence when the time comes.
• We could iterate on the AGI safety fundamentals course so that it is the best introduction to the issue possible at any particular time, even if we need to update it.
• We could be organising conferences, fellowships and events so that we have experienced organisers available when we need them.
• We could run research groups so that our leaders have experience in the day-to-day of these organisations and so that they already have a pre-vetted team in place for when they are needed.
We could try some kinds of drills or practice instead, but I suspect that the best way to learn how to run a research group is to actually run a research group.
(I want to further suggest that if someone had offered you $1 million and asked you to figure out ways of making progress at this stage then you would have had no trouble in finding things that people could do).