One may counter the importance of promoting positive values now by arguing that we are currently living at the most influential time in history because we are at a unique “time of perils”, where we have the technological power to destroy ourselves but lack the wisdom to ensure we don’t. In that case, we should spend resources now on near-term existential risk mitigation rather than on slower “buck-passing” strategies that enable decision-makers to be as effective as possible in the future.
I think you may have in mind the argument that we’re living at the most influential time specifically due to extinction risk being high?
One could also think we’re living at the most influential time specifically because value lock-in could happen soon. (And this could also mean existential risk is high, as existential risk includes not only extinction but also things like irreversible dystopias.) In that case, promoting positive values now could be a top priority.
Also, if one does think extinction risk is currently much higher than other existential risks, it’s still possible that promoting positive values now is a top priority. E.g., if we spread moral concern for future generations and something like “the virtue of prudently attending to low-probability, high-stakes risks”, that could lead to more resources going towards extinction risk reduction. (Though if we think the time of perils is quite short and happening right now, that probably pushes against promoting positive values, since value promotion is probably slower-moving than interventions like directly improving AI safety or biosecurity.)
But this is a minor point in the context of your article, and would in any case merely strengthen your core arguments.
Also, I’m not saying I actually think value lock-in is likely soon, or that promoting positive values is a top-priority intervention for reducing extinction risk; I’m merely saying these are plausible views one could hold.
I think you may have in mind the argument that we’re living at the most influential time specifically due to extinction risk being high?
I don’t think I only mean extinction risk. We could have, say, a nuclear war that doesn’t cause us to go extinct but is sufficiently harmful to significantly curtail our future potential. My point is that this could happen very soon, and it could be that most broad methods of promoting positive values, such as altruism and concern for sentient beings (e.g. through promoting philosophy in schools), are just too slow and indirect to significantly reduce these technology-based existential threats in the short run.
if we spread moral concern for future generations and something like “the virtue of prudently attending to low-probability, high-stakes risks”, that could lead to more resources going towards extinction risk reduction. (Though if we think the time of perils is quite short and happening right now, that probably pushes against promoting positive values, since value promotion is probably slower-moving than interventions like directly improving AI safety or biosecurity.)
I agree with this. There may be some values-spreading we can do now, aimed at those currently in power, to reduce near-term threats, but I’m unsure how tractable these efforts would be. Also, I guess such efforts are to some extent entailed in current AI/bio safety research work.
I may have been wrong to lump all values-building work in the same box, but I do see promoting philosophy in schools as something that will only bear fruit after a few decades, when today’s children reach influential positions, and even then its effects may be modest in the short run. To quote my post:
values-building would only be “finished” if we had figured out the “perfect” values and successfully embedded them into all influential institutions and people
Overall, I guess I don’t see attempts to broadly promote positive values as effective in countering any existential threat that may happen anytime soon (including value lock-in events). It’s probably only justified by appealing to the fact that we currently spend a lot of resources on near-term existential threats, and that we should diversify in recognition of the possibility of value lock-in threats in the mid to far future too.
I don’t think I only mean extinction risk. We could have, say, a nuclear war that doesn’t cause us to go extinct but is sufficiently harmful to significantly curtail our future potential.
Oh, yes, I should’ve had “extinction and unrecoverable collapse” on one side and “value lock-in / unrecoverable dystopia” on the other, rather than having only “extinction” on the first side. My mistake.
I also largely agree with the rest of your comment. I think value promotion will tend to pay off slower than many (though not all) other longtermist interventions, and that this is true of promoting philosophy in schools in particular (which is the key point for this post).