Good post. I have been following the worm wars, the case against randomistas, etc. At the risk of being blunt (and as someone with personal ties to randomista development), I think it seems pretty certain that growth in almost any form is not what EAs should be focusing on in terms of actual research. So I disagree with the claim that the method of growth is a high-impact space to evaluate, especially when we haven’t settled whether growth in general is high-impact, or even positive.
The long-term effects (and by this I don’t mean whether people will be happy ten years after growth occurs) are highly uncertain, and honestly, to the best of my intuition, negative. Given the utter lack of any sort of unifying government on this planet, I think we have enough players as is. The topic is obviously a lot more nuanced than that, but suffice it to say that no one is about to come up with an airtight argument for how development will make the world better in 100 years. Like many others have pointed out, it continues to surprise me how much we focus on direct or semi-direct impacts in EA when most of us have accepted longtermism.
The best piece of evidence by far, imo, and it’s not even a very good one, is Pritchett’s claim about the negative correlation between national income and poverty. And honestly, unless someone can completely dismantle that claim, it’s not difficult to see x-risk reduction as a more effective anti-poverty measure, given the general upward trajectory of our planet. My epistemic status on that claim isn’t super high, but still.
That being said, empirical poverty research is a very good recruiting tool for finding people who value the EA framework but haven’t had their third eye opened. I wonder if this could bite us in the butt at some point, but I don’t think EA has much of a choice unless it wants to be an even smaller, more idiosyncratic community than it currently is.
Thanks for this thoughtful comment! Thinking about x-risk reduction as giving us more time to grow the economy and alleviate poverty is really interesting.
While I agree the long-term effects are highly uncertain, I think it’s important to distinguish catch-up growth from frontier growth. Most growth accelerations in low-income countries bring them from “super poor” to “still pretty poor”. People in these countries live more comfortably, but they’re usually not getting rich enough to develop geopolitical ambitions that increase x-risk. (China and maybe India being notable exceptions.)
I’m actually not sure it’s true that “most of us have accepted longtermism.” As we say in this post, the Global Health and Development Fund is still the biggest EA Fund. Last year’s EA survey found that Global Poverty was still the most popular cause, and only 41% of respondents would choose the Long Term Future if they had to focus on one cause.
In any case, we might want to continue to have some EAs working on things other than longtermism in order to diversify in the face of moral uncertainty. And, as you say, having something useful and interesting to say about more mainstream causes is important for PR and movement growth. I thought the discussion of this point in the comments of this post was good.
It also seems like this comment could be made on any post that is not about longtermism, so there doesn’t seem to be anything especially relevant to this post here. If we don’t know whether growth is good in the long term, then we presumably also don’t know whether eradicating malaria is either.
Also, I think growth plausibly is good from a longtermist point of view because it shortens the time of perils. It also has lots of beneficial political effects, as it discourages zero-sum rent-seeking and encourages socially valuable activity.
Hi John, I’ll define here what I think you mean by “the time of perils”. I’ve heard the term before, but I had to Google it to refresh myself on what it means, and I found this definition in this forum post by Will MacAskill:
“According to the Time of Perils view, we live in a period of unusually high extinction risk, where we have the technological power to destroy ourselves but lack the wisdom to be able to ensure we don’t; after this point annual extinction risk will go to some very low level.”
It’s not clear to me how growth “shortens the time of perils” without increasing the extinction risk during the time of perils, which would be bad from a longtermist perspective. If we accelerate economic growth, we would likely accelerate climate change, and we would likely become more technologically advanced faster.
Becoming more technologically advanced at a faster rate would mean we have less time to research how to mitigate existential risks from emerging technologies, e.g. how to build safe and aligned AGI. But I’m happy to hear counterarguments to this view!
P.S. I think you can put something like “(the time wherein existential risk is unusually high)” after you use the phrase “the time of perils”, so people not familiar with the term can better understand what you mean!
Leopold Aschenbrenner has written about this here:

“The same technological progress that creates these risks is also what drives economic growth. Does that mean economic growth is inherently risky? Economic growth has brought about extraordinary prosperity. But for the sake of posterity, must we choose safe stagnation instead? This view is arguably becoming ever-more popular, particularly amongst those concerned about climate change; Greta Thunberg recently denounced “fairy tales of eternal economic growth” at the United Nations.
I argue that the opposite is the case. It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.
We might indeed be in “time of perils”: we might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk. Eventually, a nuclear war or environmental catastrophe would doom humanity regardless.
Faster economic growth could initially increase risk, as feared. But it will also help us get past this time of perils more quickly. When people are poor, they can’t focus on much beyond ensuring their own livelihoods. But as people grow richer, they start caring more about things like the environment and protecting against risks to life. And so, as economic growth makes people richer, they will invest more in safety, protecting against existential catastrophes. As technological innovation and our growing wealth has allowed us to conquer past threats to human life like smallpox, so can faster economic growth, in the long run, increase the overall chances of humanity’s survival.
This argument is based on a recent paper of mine, in which I use the tools of economic theory—in particular, the standard models economists use to analyze economic growth—to examine the interaction between economic growth and the risks engendered by human activity.”
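To make that mechanism concrete, here is a minimal toy simulation of the time-of-perils logic, written in Python. To be clear, this is not the model from Aschenbrenner’s paper: every functional form and parameter below is an illustrative assumption of mine. It just encodes the quote’s claims that raw risk rises with technological capability, that safety is a luxury good whose spending rises faster than income, and that stagnation locks in the current hazard forever.

```python
# Toy sketch of the time-of-perils argument (NOT Aschenbrenner's model).
# All functional forms and parameter values are illustrative assumptions.
import math

def survival_probability(growth_rate, years=500, y0=5.0):
    """P(no extinction over `years` years) under a constant income growth rate.

    y0 is income at the start of the time of perils (arbitrary units).
    """
    y = y0
    log_survival = 0.0
    for _ in range(years):
        tech_risk = 0.002 * y            # raw risk scales with capability
        safety = 1.0 + 0.05 * y ** 1.5   # safety spending rises superlinearly with income
        hazard = min(tech_risk / safety, 0.999)  # annual extinction probability
        log_survival += math.log(1.0 - hazard)
        y *= 1.0 + growth_rate
    return math.exp(log_survival)

for g in (0.00, 0.01, 0.03):
    print(f"growth {g:.0%}: P(survive 500 yrs) ≈ {survival_probability(g):.2f}")
```

With these made-up numbers, survival probability increases with the growth rate even though the annual hazard temporarily rises above its stagnation level early on, because faster growth exits the peril zone sooner. The qualitative pattern, not the specific values, is the point.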
“Given the utter lack of any sort of unifying government on this planet, I think we have enough players as is.”
In terms of reducing x-risk, it seems plausible that it would have helped to have more rich countries capable of lobbying against the nuclear arms race.