Critical decisions about advanced technologies, including artificial intelligence, pandemic preparedness, or nuclear conflict, as well as policies shaping safety, leadership, and long-term wellbeing, depend on human psychology.
I am surprised by this. Almost all of these decisions happen in social and institutional contexts where most of the variance in outcomes arguably results not from individual psychology but from differences in institutional structures, culture, politics, economics, etc.
E.g. if one wanted to understand the context of these decisions better (which I think is critical!), shouldn't this primarily motivate a social science research agenda focused on questions such as "how do decisions about advanced technologies get made?", "what are the best leverage points?", etc.?
Put somewhat differently, insofar as it is a key insight of the social sciences (including economics) that societal outcomes cannot be reduced to individual-level psychology because they emerge from the (strategic) interaction and complex dynamics of billions of actors, I am surprised by this focus, at least insofar as the motivation is to better understand collective decision-making and actions taken in key GCR areas.
Thanks for your thoughtful comment—I agree that social and institutional contexts are important for understanding these decisions. My research is rooted in social psychology, so it inherently considers these contexts. And I think individual-level factors like values, beliefs, and judgments are still essential, as they shape how people interact with institutions, respond to cultural norms, and make collective decisions. But of course, this is only one angle to study such issues.
For example, in the context of global catastrophic risks, my work explores how psychological factors intersect with collective dynamics and institutions. Here are two examples:
Crying wolf: Warning about societal risks can be reputationally risky
Does One Person Make a Difference? The Many-One Bias in Judgments of Prosocial Action
I think our positions are relatively close, and we are at risk of misunderstanding each other.
I am not saying psychology isn't part of this or that this work isn't extremely valuable; I am a big fan of what you and Stefan are doing.
I would just say it is a fairly small part of the question of collective decision-making / societal outcomes. E.g. if one wanted to start a program on better understanding decision-making in key GCR areas, I would expect the next sentence to be something like "we are assembling a team of historians, political scientists, economists, social psychologists, etc.", not "here is a research agenda focused on psychology and behavioral science." Maybe psychology and behavioral science would be 5-20% of such an effort.
The reason I react strongly here is that I think EA has a tendency to underappreciate social sciences outside economics, and we do so at our own peril. E.g. it seems likely that having more people trained in policy and the social sciences would have avoided the blind spot of being late on AI governance.