Very interesting post.
But it seems to me that this argument assumes a relatively stable, universal, and fixed "human nature", and that that's quite a questionable assumption.
For example, the fact that a person was going to start a nuclear war that would've wiped out humanity may not give much evidence about how people tend to behave if, in reality, behaviours are heavily influenced by situations. Nor would it give much evidence about how people in general tend to behave if behaviours vary substantially between different people. And even if behavioural patterns are quite stable and universal, if they're at least quite manipulable, then the fact that that person would've started that war only gives strong evidence about current behavioural tendencies, not about what we're stuck with in the long term. (I believe this is somewhat similar to Cameron_Meyer_Shorb's point.)
Under any of those conditions, the fact that that person would've started that war provides little evidence about typical human behavioural patterns in the long term, and thus little evidence about the potential value of the long-term future.
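To make that evidential point concrete, here's a toy Bayesian sketch (a Python illustration with entirely made-up numbers, not anything from the original argument): the near-miss only shifts our view of long-term "human nature" substantially if that nature is fixed enough that the near-miss is much likelier under a "bad" nature than a "good" one.

```python
# Toy Bayesian sketch (made-up numbers, purely illustrative): how much
# should a narrowly averted war update P(long-term human nature is "bad")?

def posterior_bad(prior_bad, p_nearmiss_if_bad, p_nearmiss_if_good):
    """P(bad | near-miss) via Bayes' rule."""
    joint_bad = prior_bad * p_nearmiss_if_bad
    joint_good = (1 - prior_bad) * p_nearmiss_if_good
    return joint_bad / (joint_bad + joint_good)

prior = 0.5

# Fixed, universal nature: the near-miss is far likelier if nature is
# "bad", so the update is large.
print(posterior_bad(prior, 0.9, 0.1))    # 0.9

# Malleable, situational nature: the near-miss is almost equally likely
# either way, so the update is small.
print(posterior_bad(prior, 0.55, 0.45))  # 0.55
```

On this framing, malleability pushes the likelihood ratio toward 1, which is the formal version of "little evidence about the long term".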
I suspect that there's at least some substantial stability and universality to human behaviours. But on the other hand, there's certainly evidence that situational factors are often important and that different people vary substantially (https://www.ncbi.nlm.nih.gov/pubmed/20550733).
Personally, I suspect the most important factor is how manipulable human behavioural patterns are. The article cited above seems to show the huge degree to which "cultural" factors influence many behavioural patterns, even things we might assume are extremely basic or biologically determined, like susceptibility to optical illusions. And such cultural factors typically aren't even purposeful interventions, let alone scientific ones.
It's of course true that a lot of scientific efforts to change behaviours fail, and that even when they succeed, they typically don't succeed for everyone. But some things have worked on average. And the social sciences working on behavioural change are very young in the scheme of things, and their methods and theories are continually improving (especially after the replication crisis).
Thus, it seems very plausible to me that even within a decade we could develop very successful methods of tempering violent inclinations, and that in centuries far more could be done. And that's all just focusing on our "software"; efforts focusing on our biology itself could conceivably accomplish far more radical changes. That is, of course, if we don't wipe ourselves out before this can be done.
I recently heard someone on the 80,000 Hours podcast (can't remember who or which episode, sorry) discussing the idea that we may not yet be ready, in terms of our "maturity" or wisdom, for some of the technologies that seem to be around the corner. They gave the analogy that we might trust a child with scissors but not with an assault rifle. (That's a rough paraphrase.)
So I think there's something to your argument, but I'd also worry that weighting it too heavily would be somewhat akin to letting the child keep the gun based on the logic that, if something goes wrong, that shows the child would've always been reckless anyway.
Thanks!
This all strikes me as a good argument against putting much stock in the particular application I sketch out; maybe preventing a near-term nuclear war doesn't actually bode so badly for the subsequent future, because "human nature" is so malleable.
Just to be clear, though: I only brought up that example in order to illustrate the more general point about the conditional value of the future potentially depending on whether we have marginally averted some x-risk. The dependency could be mediated by one's beliefs about human psychology, but it could also be mediated by one's beliefs about technological development or many other things.
I was also using the nuclear war example just to illustrate my argument. You could substitute in any other catastrophe/extinction event caused by violent actions of humans. Again, the same idea that "human nature" is variable and (most importantly) malleable would suggest that the potential for this extinction event provides relatively little evidence about the value of the long term. And I think the same would go for anything else determined by other aspects of human psychology, such as short-sightedness rather than violence (e.g., ignoring the consequences of AI advancement or carbon emissions), because again that wouldn't show we're irredeemably short-sighted.
Your mention of "one's beliefs about technological development" does make me realise I'd focused only on what the potential for an extinction event might reveal about human psychology, not on what it might reveal about other things. But most relevant other things that come to mind seem to me like they'd collapse back to human psychology, and thus my argument would still apply in a somewhat modified form. (I'm open to hearing suggestions of things that wouldn't, though.)
For example, the laws of physics seem to me likely to determine the limits of technological development, but not whether it tends to be "good" or "bad". That seems much more up to us and our psychology, and thus it's a tendency that could change if we change ourselves. The same goes for things like whether institutions are typically effective; that isn't a fixed property of the world, but rather a result of our psychology (as well as our history, current circumstances, etc.), and thus changeable, especially over very long time scales.
The main way I can imagine I could be wrong is if we do turn out to be essentially unable to substantially shift human psychology. But it seems to me extremely unlikely that that'd be the case over a long time scale, especially if we're willing to do things like changing our biology itself where necessary (and obviously with great caution).