(Warning: rambly hand-wavy comment incoming!)
Why is any field different from any other? For example, physicists and engineers learn about Fermi estimation and first-principles reasoning, while ecologists and economists build thinking tools based on an assumption of competitive equilibrium, while lawyers and historians learn to compare the trustworthiness of conflicting sources by corroborating multiple lines of circumstantial evidence. Different fields develop different styles of thinking presumably because those styles are helpful for solving the problems the field exists to make progress on.
But of course, fields also have incidental cultural differences—maybe economists are disproportionately New Yorkers who like bagels and the big city, while ecologists are disproportionately Coloradans who like granola and mountain hiking trails. It would be a shame if someone who could be a brilliant economist got turned off of that career track just because they didn’t like the idea of bagel brunches.
I mention this because it seems like there are a few different things you could be saying, and I am confused about which ones you mean:
1. The “core” thinking tools of EA need to be improved by an infusion of humanities-ish thinking. Right now, the thinking style of EA is on the whole too STEM-ish, and this impairment is preventing EA from achieving its fundamental mission of doing the most good.
2. The “core” thinking tools of EA are great and don’t need to change, but STEM style is only weakly correlated with those core thinking tools. We’re letting great potential EAs slip through the cracks because we’re stereotyping too hard on an easily-observed surface variable, thus getting lots of false positives and false negatives when we try to detect who really has the potential to be great at the “core” skills. STEM style is more like an incidental cultural difference than a reliable indicator of “core” EA mindset.
3. The “core” thinking tools of EA are great, and STEM style is a good-enough approximation for them, but nevertheless every large organization/movement needs a little bit of everything—even pure engineering firms like Lockheed Martin also need legions of managers, event planners, CGI artists, copyeditors, HR staff, etc. For this and other reasons (like the movement’s wider reputation), EA should try to make itself accommodating and approachable for non-STEM thinkers even if we believe that the core mission is inherently STEM-ish.
4. In general, it never hurts for individual people to try harder to listen and understand other people’s diverse perspectives, since that is often the best way to learn something really new.
Personally, I am a pretty strong believer that the unique thinking style of effective altruism has been essential for its success so far, and that this thinking style is very closely related to certain skills & virtues common in STEM fields. So I am skeptical that there is much substance behind claims #1 or #2 in general. Of course I’d be very open to considering particular examples of ways that the “core” EA ideas should be changed, or ways that STEM-ishness makes a poor proxy for EA promisingness.
I am totally on board with #3 and #4, although they don’t imply as much of a course-correction for the movement as a whole.
Despite feeling defensive about STEM values in EA, I also see that the growth of any movement is necessarily a tightrope walk between conserving too many arbitrary values and throwing the baby out with the bathwater. If EA had stuck to its very early days of being a movement composed mostly of academic moral philosophers who donated 50%+ of their income to global health charities, it would never have grown and done as much good as it has IRL. But it would also be pretty pointless for EA to dilute itself to the point of becoming indistinguishable from present-day mainstream thinking about charity.
Some bad visions of ways that I’d hate to see us move away from STEM norms (albeit not all of these involve moving towards the humanities):
If EA sank back towards the general mess of scope neglect, aversion to hits-based giving, blame avoidance / the “Copenhagen interpretation of ethics”, and unconcern about impact that it was originally founded to rise above.
If EA was drawn into conflict-oriented political ideologies like wokeism.
If EA stopped being “weird” and thinking from first principles about what seems important (for instance, recognizing the huge potential danger of unaligned AI), instead placing more weight on following whatever official Very Serious Issues are popular among journalists, celebrities, etc.
If EA drifted into the elaborate, obfuscated writing style of some academic fields, like postmodern literary theory or continental philosophy.
[Losing some crucial aspects of rationality that I find hard to put into words.]
Some areas where I feel EA has drawn inspiration from the humanities in a very positive way:
The fact that EA is an ambitious big-tent moral & social movement at all, rather than a narrower technical field of “evaluating charities like investments” as perhaps envisioned when GiveWell was originally founded.
The interest in big philosophical questions about humanity and civilization, and the fact that imaginative speculation is acceptable & encouraged.
The ideas of effective altruism are often developed through writing, debate, and communication, rather than primarily through experiment or observation in a formal sense.
The overall style of being “interdisciplinary” and generalist in its approach to many problems.
(My personal background: I am an aerospace engineer by trade, although I have a naturally generalist personality and most of my posts on the Forum to date have been weird creative-writing type things rather than hard-nosed technical analyses. My gut reaction to this post was negative, but on rereading the post I think my reaction was unjustified; I was just projecting the fears that I listed above onto areas of the post that were vague.)
I guess my conclusion is that I am psyched about #3 and #4 -- it’s always good to be welcoming and creative in helping people find ways to contribute their unique talents. And then, past that… there are a bunch of tricky fundamental movement-growth issues that I am really confused about!
(Meta: I am afraid that I am strawmanning your position because I do not understand it correctly, so please let me know if that is the case.)
> Personally, I am a pretty strong believer that the unique thinking style of effective altruism has been essential for its success so far, and that this thinking style is very closely related to certain skills & virtues common in STEM fields. So I am skeptical that there is much substance behind claims #1 or #2 in general.
I agree with you that it seems plausible that the unique thinking style of EA has been essential to a lot of the successes EA has achieved, and that this thinking style is closely related to STEM fields.
> 1. The “core” thinking tools of EA need to be improved by an infusion of humanities-ish thinking. Right now, the thinking style of EA is on the whole too STEM-ish, and this impairment is preventing EA from achieving its fundamental mission of doing the most good.
But it is unclear to me why this should imply that #1 is wrong. EA wants to achieve the massive goal of doing the most good. This makes it very important to get a highly accurate map of the territory we are operating in. Taking that into account, it is a very strong claim to be confident that the “core” thinking tools we have used so far are the best we could be using, and that we do not need to look at the tools other fields are using before deciding that ours are actually the best. This is especially true since EA does lack a bunch of academic disciplines. Most EA ideas and thinking tools come from Western analytic philosophy and STEM research. That does not mean they are wrong—it could be that they all turn out to be correct—but they encompass only a small portion of all the knowledge out there. I dare you to chat with a philosopher who researches non-Western epistemology—your mind will be blown by how different it is.
More generally: the fact that it is sometimes hard to understand people from very different fields is exactly why it is so important and valuable to try to get those people into EA. They usually view the world through a very different lens and can check whether they see aspects of the territory that we are missing and should incorporate into EA.
I am afraid that we are so confident in the tools we have that we do not spend enough time trying to understand how other fields think and therefore miss out on an important part of reality.
To be clear: I think that a big chunk of what makes EA special is related to STEM-style reasoning, and we should probably try hard to hold onto it.
> 2. The “core” thinking tools of EA are great and don’t need to change, but STEM style is only weakly correlated with those core thinking tools. We’re letting great potential EAs slip through the cracks because we’re stereotyping too hard on an easily-observed surface variable, thus getting lots of false positives and false negatives when we try to detect who really has the potential to be great at the “core” skills. STEM style is more like an incidental cultural difference than a reliable indicator of “core” EA mindset.
Small thing: It is unclear to me whether we actually get a lot of false positives, and this was also not the claim of the post, if I understand it correctly.