Thanks so much for the comment. This is obviously a complicated topic so I won’t aim to be complete, but here are some thoughts.
One challenge with epistemic, moral, and (I’ll throw in) political ideas is that we’ve literally been debating them for 2,500 years and we still don’t agree.
From my perspective, while we don’t agree on everything, there has been a lot of advancement during this period, especially if one looks at pockets of intellectuals. The Ancient Greek schools of thought, the Renaissance, the Enlightenment, and the growth of atheism are examples of what seems like substantial progress (especially to people who agree with them, like myself).
I would agree that epistemic, moral, and political progress seems to be far slower than technological progress, but we definitely still have it, and it seems more net positive. Real effort here also seems far more neglected. There are clearly a fair number of academics in these areas, but in terms of number of people, resources, and “get it done” ability, regular technical progress has been strongly favored. This means that we may have less leverage, but the neglectedness could also mean that there are some really nice returns to highly competent efforts.
The second thing I’d flag is that it’s possible that advances in the Internet and AI could mean that progress in these areas becomes much more tractable in the next 10 to 100 years.
I started by studying material progress because (1) it happened to be what I was most interested in and (2) it’s the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress.
I think I mostly agree with you here, though I myself am less interested in technical progress. I agree that they can’t be separated, which is all the more reason I would encourage you to emphasize epistemic and moral progress in your future work :-). I imagine any good study of epistemic and moral progress would include studies of technology, for the reasons you mention. I’m not suggesting that you focus on epistemic and moral progress only, but rather that they could either be the primary emphasis where possible, or just a bit more emphasized here and there. Perhaps this could be a good spot to collaborate directly with Effective Altruist researchers.
I haven’t read Ord’s take on this, but the concept as you describe it strikes me as not quite right.
My take was written quickly, and I think your impression is very different from his actual take. In The Precipice, Toby Ord recommends that the Long Reflection happen as one of three phases, the first being “Reaching Existential Security”. This would involve setting things up so that humanity has a very low chance of existential risk per year. It’s hard for me to imagine exactly what this would look like; there’s not much written about it in the book. I imagine it would look very different from what we have now and would probably require a fair amount more technological maturity. Having setups in place to protect against existentially serious biohazards would be a precondition. There is obviously some trade-off between the technological abilities that let us make quick progress during the reflection and the risks and speed of getting there, but that’s probably outside the scope of this conversation.
In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.
I agree that they are massively useful, but they are also massively risky. I’m sure that many of the advancements we have are locally net negative; otherwise it seems odd that we could have so many big changes yet still end up with a world as challenging and messy as ours.
Some science/technology/infrastructure/surplus wealth is obviously useful for getting us to Existential Security, and some is probably harmful. It’s not really clear to me that average modern advancements are net positive at this point (this is incredibly complicated to figure out!), but it seems clear that at least some are (though we might not be able to tell which ones).