Thank you for writing this article! It’s interesting and important. My thoughts on the issue:
Long Reflection
I see a general tension between achieving existential security and putting sentient life on the best or an acceptable trajectory before we cease to be able to cooperate causally very well anymore because of long delays in communication.
A focus on achieving existential security pushes toward investing less time into getting all basic assumptions just right, because these investigations trade off against a terrible risk. I’ve read that homogeneity is good for early-stage startups because their main risk is being too slow, not getting something wrong. So people who are mainly concerned with existential risk may accept being very wrong about a lot of things so long as they still achieve existential security in time. I might call this “emergency mindset.”
Personally – and I worry I’m biased here – I would rather hasten the Long Reflection to avoid getting some things terribly wrong in the futures where we do achieve existential security, even if these investigations come at some risk of diverting resources from reducing existential risk. I might call this “reflection mindset.”
There is probably some impartially optimal trade-off here (plus comparative advantages of different people), and that trade-off would also imply how many resources it is best to invest into avoiding homogeneity.
I’ve also commented on this on a recent blog article where I mention more caveats.
Ideas for Solutions
I’ve seen a bit of a shift from emergency mindset toward reflection mindset – noticeably since 2019, and more gradually since 2015. So if it turns out that we’re right and EA should err more in the direction of reflection, then a few things may aid that development.
Time
I’ve found that I need to rely a lot on others’ judgments when I don’t have much time. But now that I have more time, I can investigate many interesting questions myself and so need to rely less on the people I perceive as experts. Moreover, I’m less afraid to question expert opinions when I know something beyond the Cliff’s Notes version of a topic, because I’ll be less likely to come off as arrogantly stupid.
So maybe it would help if people who are involved in EA in non-research positions were generally encouraged, incentivized, and allowed to take more time off to also learn things for themselves.
Money
The EA Funds could explicitly incentivize the above efforts, but they could also explicitly incentivize broad literature reviews, summaries of topics, and interviews with experts on topics that relate to foundational assumptions in EA projects.
“Growth and the Case Against Randomista Development” seems like a particularly impressive example of such an investigation.
Academic Research
I’ve actually seen a shift toward academic research over the past 3–4 years. And that seems valuable to continue (though my above reservations about my personal bias in the issue may apply). It is likely slower and maybe less focused. But academic environments are intellectually very different from EA, and professors in some field are very widely read in that field. So being in that environment and becoming a person that widely read people are happy to collaborate with should be very helpful in avoiding the particular homogeneities that the EA community comes with. (They’ll have homogeneities of their own of course.)