I’ll go ahead and present a short counterpoint. My counterpoint can also be read as a critique of the EA movement in general.
I think EA already has too much of a heart focus and not enough of a head focus. The aid of yesteryear was often good intentions gone awry: it undercut local producers or went to corrupt government leaders and sometimes exacerbated the problems it was trying to solve. Although EA may have solved these specific problems, I’m not convinced it has solved the more general problem of the road to hell often being paved with good intentions.
To solve this broader problem, one could get better at predicting things, e.g. by reading the recent Superforecasting book. The fact that the EA community has shown only a little interest in the book suggests to me that we do not have enough of a head focus and our road likely leads to hell just like many others.
Superforecasting recommends “[pursuing] relentlessly every bone of contention in order to prevent errors arising from too superficial an analysis of the issues”, devil’s advocacy, and constant disagreements. This kind of culture is not for everyone, and that’s fine. But by recruiting people who are only interested in EA when it’s pitched in a heart-friendly way, we risk weakening our already-weak forecasting culture. We are already emphasizing self-sacrifice and movement growth too much at the expense of productive disagreements and effective analysis, in my view.
Effective altruists are very keen on Superforecasters. The project behind the book was funded by Jason Matheny (who’s mentioned in the book), who attended EAG:SF. We also mentioned the Good Judgement Project on the 80k blog almost two years ago.
https://80000hours.org/2014/01/which-cause-is-most-effective-300/
I agree with your key point though that this could be a tough tradeoff.
Thanks for sharing your perspective. I really like Superforecasting myself, and actually as part of the Intentional Insights project, one of our members conveys a popularized version of this strategy to a broad audience in this blog.
However, I am concerned that the path of excessive analysis before acting may fall prey to information bias, the tendency to seek more information than a decision actually requires. As the Lean Startup methodology suggests, we should experiment, learn from the evidence, and then go on to do better, not simply sit and debate. And Superforecasting itself emphasizes that the only way to get better at forecasting is to learn from previous forecasts and then go on to do better in the future.
For example, the whole point of this emotionally-oriented approach is to do better than we did before in reaching more people and helping them become more effective donors.
I’m definitely in favor of doing experiments, learning by doing, etc. But no amount of these is going to save you from working towards the wrong goal (unless you’re doing experiments to try to figure out if the goal is one that’s good to work towards, which doesn’t sound quite like what you’re proposing here, although I guess your post could be interpreted this way).
The best counterargument I can think of to my position:
EA must either grow or die. Suppose the idea that one should donate a significant fraction of one’s income makes many people uncomfortable, and their natural response is to find some rationalization for why EA is bad. Then in the absence of attaining influence to counteract this, EA’s image will decline continuously as people broadcast these rationalizations.
The only solution is to grow and attain influence before we get killed, i.e. nudge journalism towards our values faster than journalists nudge us towards their values. (#1 journalist value: pageviews.)
I’d say the solution would be to persuade certain people, specifically ones more emotionally inclined than the current typical EA, to be more oriented toward our values. This is a much smaller ask :-)