Very interesting post.
But it seems to me that this argument assumes a relatively stable, universal, and fixed "human nature", and that that's quite a questionable assumption.
For example, the fact that a person was going to start a nuclear war that would've wiped out humanity may not give much evidence about how people tend to behave if, in reality, behaviours are strongly influenced by situations. Nor would it give much evidence about how people in general tend to behave if behaviours vary substantially between different people. And even if behavioural patterns are quite stable and universal, if they're at least quite manipulable, then the fact that a person would've started that war only gives strong evidence about current behavioural tendencies, not about what we're stuck with in the long term. (I believe this is somewhat similar to Cameron_Meyer_Shorb's point.)
Under any of those conditions, the fact that a person would've started that war provides little evidence about typical human behavioural patterns in the long term, and thus little evidence about the potential value of the long-term future.
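To put that in rough Bayesian terms (this is just my own framing of the point, with $H$ and $E$ as illustrative labels): let $H$ be the hypothesis that long-term human behavioural tendencies are benign, and $E$ the observation that a particular person would've started the war. How much $E$ should shift our credence in $H$ is governed by the likelihood ratio

$$\frac{P(E \mid H)}{P(E \mid \neg H)}.$$

If behaviour is largely situational, varies substantially between people, or is malleable, then $E$ is about as likely under $H$ as under $\neg H$, so the ratio is near 1 and the posterior $P(H \mid E)$ stays close to the prior $P(H)$.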
I suspect that there's at least some substantial stability and universality to human behaviours. But, on the other hand, there's certainly evidence that situational factors are often important and that different people vary substantially (https://www.ncbi.nlm.nih.gov/pubmed/20550733).
Personally, I suspect the most important factor is how manipulable human behavioural patterns are. The article cited above seems to show a huge degree to which "cultural" factors influence many behavioural patterns, even things we might assume are extremely basic or biologically determined, like susceptibility to optical illusions. And such cultural factors typically aren't even purposeful interventions, let alone scientific ones.
It's of course true that a lot of scientific efforts to change behaviours fail, and that even when they succeed they typically don't succeed for everyone. But some things have worked on average. And the social sciences working on behavioural change are very young in the scheme of things, and their methods and theories are continually improving (especially after the replication crisis).
Thus, it seems very plausible to me that even within a decade we could develop very successful methods of tempering violent inclinations, and that over centuries far more could be done. And that's all just focusing on our "software": efforts focusing on our biology itself could conceivably accomplish far more radical changes. That is, of course, assuming we don't wipe ourselves out before this can be done.
I recently heard someone on the 80,000 Hours podcast (can't remember who or which episode, sorry) discussing the idea that we may not yet be ready, in terms of our "maturity" or wisdom, for some of the technologies that seem to be around the corner. They gave the analogy that we might trust a child with scissors but not with an assault rifle. (That's a rough paraphrase.)
So I think there's something to your argument, but I'd also worry that weighting it too heavily would be somewhat akin to letting the child keep the gun, based on the logic that, if something goes wrong, that shows the child would've always been reckless anyway.
Thanks!
This all strikes me as a good argument against putting much stock in the particular application I sketch out; maybe preventing a near-term nuclear war doesn't actually bode so badly for the subsequent future, because "human nature" is so malleable.
Just to be clear, though: I only brought up that example in order to illustrate the more general point about the conditional value of the future potentially depending on whether we have marginally averted some x-risk. The dependency could be mediated by one's beliefs about human psychology, but it could also be mediated by one's beliefs about technological development or many other things.
I was also using the nuclear war example just to illustrate my argument. You could substitute in any other catastrophe/extinction event caused by violent actions of humans. Again, the same idea that "human nature" is variable and (most importantly) malleable would suggest that the potential for this extinction event provides relatively little evidence about the value of the long-term future. And I think the same would go for anything else determined by other aspects of human psychology, such as short-sightedness rather than violence (e.g., ignoring the consequences of AI advancement or carbon emissions), because again that wouldn't show we're irredeemably short-sighted.
Your mention of "one's beliefs about technological development" does make me realise I'd focused only on what the potential for an extinction event might reveal about human psychology, not on what it might reveal about other things. But most relevant other things that come to mind seem to me like they'd collapse back to human psychology, and thus my argument would still apply in somewhat modified form. (I'm open to hearing suggestions of things that wouldn't, though.)
For example, the laws of physics seem to me likely to determine the limits of technological development, but not whether its tendency is to be "good" or "bad". That seems much more up to us and our psychology, and thus it's a tendency that could change if we change ourselves. The same goes for things like whether institutions are typically effective; that isn't a fixed property of the world, but rather a result of our psychology (as well as our history, current circumstances, etc.), and is thus changeable, especially over very long time scales.
The main way I can imagine I could be wrong is if we turn out to be essentially unable to substantially shift human psychology. But it seems to me extremely unlikely that that'd be the case over a long time scale, especially if we're willing to do things like changing our biology if necessary (and obviously with great caution).