Thanks for writing this! I found it quite interesting, and appreciated the clear engagement with the relevant academic literatures.
Some thoughts on the importance of this cause area
1. It seems like work to promote democracy could also be quite good from the perspective of reducing long-term risks from malevolent actors. And perhaps some interventions proposed in that post would also be good from the perspective of the other benefits of democracy. So there may be synergies between these two new/mini/sub cause areas.
2. You write:
There are reasons to think democratization is important not only in the near term but also from a longtermist perspective. Political institutions are a stronger determinant of a country’s wealth than weather or culture. [6] There is substantial empirical evidence that economic development is highly path dependent—economic and political institutions persist for hundreds of years, and have corresponding consequences for economic development.[7] Because of long-term institutional persistence, improving democratic institutions today can lead to better institutions—and correspondingly better economic outcomes—not only in the near-term but also for the “long-term future.”
I think most “longtermists” don’t see increasing economic growth as especially valuable except in relation to how it affects where humanity “ends up” (e.g., via affecting existential risk, global catastrophic risk, or how wide our moral circles ultimately are). For example, Benjamin Todd from 80k writes:
One way to help the future we don’t think is a contender is speeding it up. Some people who want to help the future focus on bringing about technological progress, like developing new vaccines, and it’s true that these create long-term benefits. However, we think what most matters from a long-term perspective is where we end up, rather than how fast we get there. Discovering a new vaccine probably means we get it earlier, rather than making it happen at all.
I share that sort of view to some extent, though I think it’s slightly overstated, given that I think speeding up development could affect how much of the universe we can ultimately reach (this is related to the astronomical waste argument).
In my draft series on Crucial questions for longtermists, I include the question “How does speeding up development affect the expected value of the future?”, some sub-questions, and a collection of sources related to these questions. You or other readers might find those sources interesting.
(By the way, I’ve now added links to this post from that series, under the question “What are the best actions for speeding up development? How good are they?” and under the topic “Importance of, and best approaches to, improving institutions and/or decision-making”.)
3. I have a vague sense that EAs engaging in democracy promotion, especially under an explicitly EA banner, might have downsides such as making the Chinese government averse to EA, which would seem plausibly quite bad for other issues (e.g., ability to coordinate on AI safety or to help foster animal welfare communities in China).
I’d also obviously feel quite uncomfortable about not discussing any pro-democracy efforts for fear of upsetting non-democratic regimes. And all cause areas will face some downsides. But this does seem like something worth bearing in mind when deciding how much to prioritise this cause area relative to other cause areas that also plausibly deserve our resources.
Thanks for this comment, Michael—I think you make a great point about risks from malevolent actors. In terms of the longtermist economic growth aspect, I was thinking more along the lines of institutional quality in the 1600s explaining a lot of the more recent economic growth trajectories, with substantial consequences for global poverty.
Oh, yes, something I forgot to mention explicitly was that it sounded like you were talking primarily about timescales of centuries, which I don’t think is typically what longtermists are focused on. I think the typical view among longtermists is something like the following: “If things go well, humanity—or whatever we become—could last for such an incredibly long time that even a very small tweak to our trajectory, if it lasts a substantial portion of that time, will ultimately matter a great deal.* And it can matter much more than a larger ‘boost’ that would ultimately ‘wash out’ on the scale of years, decades, or centuries.”
This isn’t to say that economic growth isn’t important for longtermists, but rather that, if it is important to longtermists, that may be primarily because of its effects on other aspects of our trajectory. E.g., existential risk. (And it’s currently not totally clear whether it’s good or bad for x-risk, though I think the evidence leans somewhat towards it being good; see e.g. Existential Risk and Economic Growth. Other sources are linked to from my crucial questions series.)
Though growth could also matter more “directly”, because a faster spread to the stars may reduce the ultimate astronomical waste. (There are also longtermists who may not care about astronomical waste, such as suffering-focused longtermists.)
In any case, the way you made the argument felt more “medium-termist” than “longtermist” to me. I mention that feeling partly because it may provide useful info regarding how persuasive other longtermists would find that argument, and whether they’d feel it’s really a “longtermist” argument.
*If you want to be mathy, you can think of this as the area between two slightly different curves ultimately being very large, if we travel a far enough distance along the x axis.
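The footnote’s intuition can be made concrete with a toy numeric sketch (all numbers here are hypothetical, chosen only to illustrate the shape of the argument, not drawn from any empirical estimate):

```python
# Toy illustration (hypothetical values): a small but persistent tweak to the
# trajectory accumulates more total value over a long enough horizon than a
# much larger boost that washes out after a while.

def total_extra_value(per_period_boost, horizon):
    """Sum the extra value contributed in each period up to `horizon`."""
    return sum(per_period_boost(t) for t in range(horizon))

# A small persistent tweak: +0.01 value per period, for the whole horizon.
persistent = lambda t: 0.01

# A larger boost that washes out: +1.0 per period, but only for 100 periods.
washing_out = lambda t: 1.0 if t < 100 else 0.0

horizon = 1_000_000  # a "longtermist" horizon, in arbitrary periods
print(total_extra_value(persistent, horizon))   # ≈ 0.01 * 1_000_000 = 10_000
print(total_extra_value(washing_out, horizon))  # 1.0 * 100 = 100
```

With a long enough x-axis, the area between the two slightly different curves (the persistent tweak) dwarfs the area under the short-lived spike, which is the footnote’s point.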