As discussed in other comments, it seems that progress studies focuses mostly on economic and scientific progress, and these seem to come with risks as well as rewards. At the same time, particular aspects of progress seem safer: the progress of epistemics or morality, for example. Toby Ord wrote about the Long Reflection as a method of making a lot of very specific progress before focusing on other kinds. These things are more difficult to study but might be more valuable.
So my question is: have you spent much time considering epistemic and moral progress (and other abstract but safe aspects) as things to study? Do you have any thoughts on their viability?
(I’ve written a bit more here, but it’s still relatively short).
Re my own focus:
The irony is that my original motivation for studying progress was to better ground and validate my epistemic and moral ideas!
One challenge with epistemic, moral, and (I’ll throw in) political ideas is that we’ve literally been debating them for 2,500 years and we still don’t agree. We’ve probably come up with many good ideas already, but they haven’t gotten wide enough adoption. So I think figuring out how to spread best practices is more high-leverage than making progress in these fields as such.
Before I got into what would come to be called “progress studies”, I spent a quarter-century discussing and debating philosophic ideas with many different people, who had many different viewpoints. One thing that became clear to me was that, not only do people not agree on how to solve our problems, they don’t even agree on what the problems are. A left-wing environmentalist focuses on climate change, while a right-wing deficit hawk focuses on the national debt. Each thinks that even the problem the other one is so worried about is overblown, while their own problem is neglected. So of course they call for different policies.
I realized that my views on a lot of the issues I care about, and on the problems underlying them, were founded on my keen appreciation for the story of human progress: how bad living standards used to be and how much they’ve improved.
And, further, I thought that studying the history of progress—not just material, but epistemic and moral too, actually—would be the best way to empirically ground any claims about how to make the world better.
I started by studying material progress because (1) it happened to be what I was most interested in and (2) it’s the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress. Science obviously supports technology. Freedom of thought and expression is needed for science. Economic freedom is needed for material progress. Technological progress provides the surplus that is needed to fund science, and invents the instruments that science needs too. Economic progress provides the means for a free society to defend itself militarily, and ultimately justifies and validates that society. So I don’t think they can be separated.
Long-term, I’d like to study moral and epistemic progress. I’d love to do a history of science, for instance. On moral progress, I’d love to read (or write!) about how we ended practices like slavery, dueling, and trial by ordeal; how we developed concepts like rule of law and individual rights; how we moved from tribalism to universalism and recognized the humanity of all races and sexes. Some of this is covered very well in Pinker’s recent books (Better Angels and Enlightenment Now) but more could be done.
Re the Long Reflection:
I haven’t read Ord’s take on this, but the concept as you describe it strikes me as not quite right. For one, to pause on material progress would come at a terrible cost: all of the lives we could be saving and extending, all the people we could be lifting out of poverty, all of the things we can’t even anticipate that would come from more wealth, technology and infrastructure.
For another, it seems to imply a very high degree of being able to anticipate and predict the future, which I think we just don’t have. I think David Deutsch captures this better than I can; from The Beginning of Infinity (pp 202–204):
… a recurring theme in pessimistic theories throughout history has been that an exceptionally dangerous moment is imminent. Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle.…
As we look back on the failed civilizations of the past, we can see that they were so poor, their technology was so feeble, and their explanations of the world so fragmentary and full of misconceptions that their caution about innovation and progress was as perverse as expecting a blindfold to be useful when navigating dangerous waters. Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past – namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?
When you look back at the history of progress, one theme is that it’s generally impossible to anticipate where progress will come from or what an advance will lead to. Who could have anticipated that studying electromagnetic radiation would give us ways to communicate long-distance, or to do non-invasive imaging inside the human body?
So to say, “let’s not do these risky things, let’s only do these safe things”, presumes that (a) we know what risks we are subject to and (b) we know what activities will lead towards or away from them, and towards or away from solutions. But I just don’t think we can predict those things, not at the level that a Long Reflection would imply.
If we had paused for Reflection in 2010, instead of founding Moderna and BioNTech to pursue mRNA vaccine technology, where would we be today vs. covid?
In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.
Thanks so much for the comment. This is obviously a complicated topic so I won’t aim to be complete, but here are some thoughts.
One challenge with epistemic, moral, and (I’ll throw in) political ideas is that we’ve literally been debating them for 2,500 years and we still don’t agree.
From my perspective, while we don’t agree on everything, there has been a lot of advancement during this period, especially if one looks at pockets of intellectuals. The ancient Greek schools of thought, the Renaissance, the Enlightenment, and the growth of atheism are examples of what seems like substantial progress (especially to people who agree with them, like myself).
I would agree that epistemic, moral, and political progress seems to be far slower than technological progress, but we definitely still have it, and it seems more clearly net positive. Real effort here also seems far more neglected. There are clearly a fair number of academics in these areas, but in terms of number of people, resources, and “get it done” abilities, regular technical progress has been strongly favored. This means that we may have less leverage, but the neglectedness could also mean that there are some really nice returns to highly competent efforts.
The second thing that I’d flag is that advances in the Internet and AI could mean that progress in these areas becomes much more tractable in the next 10 to 100 years.
I started by studying material progress because (1) it happened to be what I was most interested in and (2) it’s the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress.
I think I largely agree with you here, though I myself am less interested in technical progress. I agree that they can’t be separated. This is all the more reason I would encourage you to emphasize epistemic and moral progress in future work of yours :-). I imagine any good study of epistemic and moral progress would include studies of technology, for the reasons you mention. I’m not suggesting that you focus on epistemic and moral progress only, but rather that they could either be the primary emphasis where possible, or just a bit more emphasized here and there. Perhaps this could be a good spot to collaborate directly with Effective Altruist researchers.
I haven’t read Ord’s take on this, but the concept as you describe it strikes me as not quite right.
My take was written quickly, and I think your impression is very different from his actual take. In The Precipice, Toby Ord recommends that the Long Reflection happen as one of three phases, the first being “Reaching Existential Security”. This would involve setting things up so that humanity has a very low chance of existential risk per year. It’s hard for me to imagine what this would look like; there’s not much written about it in the book. I imagine it would look very different from what we have now and would probably take a fair amount more technological maturity. Having setups to ensure protection against existentially serious biohazards would be a precondition. There is obviously some trade-off between our technological abilities to make quick progress during the reflection and the risks and speed of getting there, but that’s probably outside the scope of this conversation.
In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.
I agree that they are massively useful, but they are also massively risky. I’m sure that a lot of the advancements we have are locally net negative; otherwise it seems odd that we could have so many big changes and still a world as challenging and messy as ours.
Some of our science/technology/infrastructure/surplus wealth is obviously useful for getting us to Existential Security, and some is probably harmful. It’s not really clear to me that the average modern advancement is net positive at this point (this is incredibly complicated to figure out!), but it seems clear that at least some are (though we might not be able to tell which ones).