This all strikes me as a good argument against putting much stock in the particular application I sketch out; maybe preventing a near-term nuclear war doesn’t actually bode so badly for the subsequent future, because “human nature” is so malleable.
Just to be clear, though: I only brought up that example in order to illustrate the more general point about the conditional value of the future potentially depending on whether we have marginally averted some x-risk. The dependency could be mediated by one’s beliefs about human psychology, but it could also be mediated by one’s beliefs about technological development or many other things.
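To make the structure of that point slightly more explicit, here's a minimal formalization (the symbols $V$, $A$, and $h$ are mine, purely for illustration, and I'm assuming for simplicity that $A$ bears on $V$ only via $h$):

$$\mathbb{E}[V \mid A] \;=\; \sum_{h} \mathbb{E}[V \mid h]\, P(h \mid A)$$

Here $V$ is the value of the long-term future, $A$ is the event that we marginally averted some x-risk, and $h$ ranges over hypotheses about the mediating facts (facts about human psychology, technological development, etc.). Marginally averting the risk shifts $P(h \mid A)$ away from the prior $P(h)$, and that shift is what can make the conditional value $\mathbb{E}[V \mid A]$ differ from the unconditional $\mathbb{E}[V]$.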
I was also using the nuclear war example just to illustrate my argument. You could substitute in any other catastrophe/extinction event caused by violent human action. Again, the same idea that "human nature" is variable and (most importantly) malleable would suggest that the potential for this extinction event provides relatively little evidence about the value of the long-term future. And I think the same would go for anything else determined by other aspects of human psychology, such as short-sightedness rather than violence (e.g., ignoring the consequences of AI advancement or carbon emissions), because again that wouldn't show we're irredeemably short-sighted.
Your mention of "one's beliefs about technological development" does make me realise I'd focused only on what the potential for an extinction event might reveal about human psychology, not what it might reveal about other things. But most other relevant things that come to mind seem to me like they'd collapse back into human psychology, so my argument would still apply, just in somewhat modified form. (I'm open to hearing suggestions of things that wouldn't, though.)
For example, the laws of physics seem to me likely to determine the limits of technological development, but not whether that development tends to be "good" or "bad". That seems much more up to us and our psychology, and so it's a tendency that could change if we change ourselves. The same goes for things like whether institutions are typically effective: that isn't a fixed property of the world, but rather a result of our psychology (as well as our history, current circumstances, etc.), and thus changeable, especially over very long time scales.
The main way I can imagine being wrong is if we turn out to be essentially unable to substantially shift human psychology. But that seems extremely unlikely to me over a long time scale, especially if we're willing to do things like change our biology where necessary (and obviously with great caution).
Thanks!