One may counter the importance of promoting positive values now by arguing that we are currently living at the most influential time in history, as we are at a unique "time of perils" where we have the technological power to destroy ourselves but lack the wisdom to ensure we don't. In such a case we should be spending resources now on near-term existential risk mitigation, rather than on slower "buck-passing" strategies that enable decision-makers to be as effective as possible in the future.
I think you may have in mind the argument that we're living at the most influential time specifically due to extinction risk being high?

One could also think we're living at the most influential time specifically because value lock-in could happen soon. (And this could also mean existential risk is high, as existential risk includes not only extinction but also things like irreversible dystopias.) In that case, promoting positive values now could be a top priority.
Also, if one does think extinction risk is currently much higher than other existential risks, it's still possible promoting positive values now is a top priority. E.g., if we spread moral concern for future generations and something like "the virtue of prudently attending to low-probability, high-stakes risks", that could lead to more resources going towards extinction risk reduction. (Though if we think the time of perils is quite short and right now, that probably pushes against promoting positive values, as that's probably a slower-moving intervention than interventions like directly improving AI safety or biosecurity.)
But this is a minor point in the context of your article, and would in any case merely strengthen your core arguments.
Also, I'm not saying I actually think value lock-in is likely soon, or that promoting positive values is a top-priority intervention for reducing extinction risk; I'm merely saying these are plausible views one could hold.
I think you may have in mind the argument that we're living at the most influential time specifically due to extinction risk being high?
I don't think I only mean extinction risk. We could have, say, a nuclear war that doesn't cause us to go extinct but is sufficiently harmful as to significantly curtail our future potential. My point is that this could happen very soon, and it could be that most broad methods of promoting positive values such as altruism and concern for sentient beings (e.g. through promoting philosophy in schools) are just too slow and indirect to significantly reduce these technology-based existential threats in the short run.
if we spread moral concern for future generations and something like "the virtue of prudently attending to low-probability, high-stakes risks", that could lead to more resources going towards extinction risk reduction. (Though if we think the time of perils is quite short and right now, that probably pushes against promoting positive values, as that's probably a slower-moving intervention than interventions like directly improving AI safety or biosecurity.)
I agree with this. There may be some values spreading we can do now, aimed at those who are currently in power, to reduce near-term threats, but I'm unsure how tractable these efforts would be. Also, I guess such efforts are to some extent entailed in current AI / bio safety research work.
I may have been wrong to lump all values-building work in the same box, but I do see promoting philosophy in schools as something that will only bear fruit after a few decades, when today's children become those in influential positions, and even then it may have only modest effects in the short run. To quote my post:
values-building would only be "finished" if we had figured out the "perfect" values and successfully embedded them into all influential institutions and people
Overall I guess I don't see attempts to broadly promote positive values as being effective in countering any existential threat that may happen anytime soon (including value lock-in events). It's probably only justified by appealing to the fact that we currently spend a lot of resources on near-term existential threats and that we should diversify in recognition of the possibility of there being value lock-in threats in the mid to far future too.
I don't think I only mean extinction risk. We could have, say, a nuclear war that doesn't cause us to go extinct but is sufficiently harmful as to significantly curtail our future potential.
Oh, yes, I should've had "extinction and unrecoverable collapse" on one side and "value lock-in / unrecoverable dystopia" on the other, rather than having only "extinction" on the first side. My mistake.
I also largely agree with the rest of your comment. I think value promotion will tend to pay off slower than many (though not all) other longtermist interventions, and that this is true of promoting philosophy in schools in particular (which is the key point for this post).