I was also using the nuclear war example just to illustrate my argument. You could substitute in any other catastrophe/extinction event caused by violent actions of humans. Again, the same idea that "human nature" is variable and (most importantly) malleable would suggest that the potential for this extinction event provides relatively little evidence about the value of the long-term. And I think the same would go for anything else determined by other aspects of human psychology, such as short-sightedness rather than violence (e.g., ignoring consequences of AI advancement or carbon emissions), because again that wouldn't show we're irredeemably short-sighted.
Your mention of "one's beliefs about technological development" does make me realise I'd focused only on what the potential for an extinction event might reveal about human psychology, not what it might reveal about other things. But most relevant other things that come to mind seem to me like they'd collapse back to human psychology, and thus my argument would still apply in just somewhat modified form. (I'm open to hearing suggestions of things that wouldn't, though.)
For example, the laws of physics seem to me likely to determine the limits of technological development, but not whether it tends to be "good" or "bad". That seems much more up to us and our psychology, and thus it's a tendency that could change if we change ourselves. The same goes for things like whether institutions are typically effective; that isn't a fixed property of the world, but rather a result of our psychology (as well as our history, current circumstances, etc.), and thus changeable, especially over very long time scales.
The main way I can imagine I could be wrong is if we do turn out to be essentially unable to substantially shift human psychology. But it seems to me extremely unlikely that that'd be the case over a long time scale, especially if we're willing to do things like changing our biology if necessary (and obviously with great caution).