I think I agree with the general thrust of your post (that mental health may deserve more attention amongst neartermist EAs), but I don’t think the anecdote you chose highlights much of a tension.
> I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs.
I am excited about improving the mental health of EAs, primarily because I think that many EAs are doing valuable work that improves the lives of others, and good mental health is going to help them be more productive. (I do also care about EAs being happy as much as I care about anyone being happy, but I expect the value produced by this to be much less than the value produced by those EAs' actions.)
I care much less about the productivity benefits that we’d see from improving the mental health of people outside of the EA community (although of course I do think their mental health matters for other reasons).
So the above claim seems pretty reasonable to me.
As an illustration, I can care much more about EAs having good laptops than about random people having good laptops, because I am much more sceptical that giving random people good laptops produces impact than I am about giving good laptops to EAs.
Pretty much this. I don’t think discussions on improving mental health in the EA community are motivated by improving wellbeing directly, but rather by allowing us to be as effective a community as possible. Poor mental health is a huge drain on productivity.
If the focus on EA community mental health were based on direct wellbeing benefits, I would be quite shocked. We’re a fairly small community, and it’s likely to be far more cost-effective to improve the mental health of people living in lower-income countries (as HLI’s StrongMinds recommendation suggests).
Has anyone done the analysis to determine the most cost-effective ways to increase the productivity of the EA community? It’s not obvious to me that focussing on mental health would be the best option, and if it isn’t, I feel confused about the rationale for prioritising the mental health of EAs over other productivity interventions.
I don’t know for sure that we have prioritised mental health over other productivity interventions, although we may have. Effective Altruism Coaching doesn’t have a sole mental health focus (also see here for the 2020 annual review), but I think that is just one person doing the coaching, so it may not be representative of wider productivity work in EA.
It’s worth noting that mental health may plausibly be proportionally more of a problem within EA than outside it, as EAs may worry more about the state of the world, whether they’re having an impact, and so on. This may in turn require novel resources and approaches to treating mental health problems that aren’t necessarily widely available elsewhere.
Other things like how best to work productively may be well covered by existing resources and so may not need EA-specific materials.
Hmm, not sure you’ve spotted the tension. The tension arises from recognising that X is a problem in your social world, but not then incorporating that thought into your cause prioritisation thinking. This is the puzzling phenomenon I’ve observed re mental health.
Of course, if someone has—as you have—both recognised the problem in their social world, and then also considered whether it is a global priority, then they’ve connected their thinking in the way I hope they would!
FWIW, I think improving the mental health of EAs is plausibly very sensible purely on productivity grounds, but I wasn’t making a claim here about that either way.
I think the point Caleb is making is that your EAG London story doesn’t necessarily show the tension that you think it does. And for what it’s worth I’m sceptical this tension is very widespread.
Like IanDavidMoss says, I think the more interesting phenomenon you mention is the sudden and unnoticed switch to sceptical mode:
> I then told them about my work at the Happier Lives Institute comparing the cost-effectiveness of cash transfers to treating depression and how we’d found the latter was about 10x better (original analysis, updated analysis). They suddenly switched to sceptical mode, saying they didn’t believe you could really measure people’s mental health or feelings and that, even if you could, targeting poverty must still be better.
>
> After a couple of minutes of this, I suddenly clocked how weirdly disconnected the first and second parts of the conversation were. I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs. They looked puzzled—the tension seemed never to have occurred to them—and, to their credit, they replied “oh, yeah, hmm, that is weird”.
I think you could be right about this AND that Michael’s anecdote could also be pointing to something true: personal or proximate experience with a problem can increase its salience for people conducting supposedly dispassionate analysis. We shouldn’t pretend that the cognitive biases that apply to everyone else don’t also apply to people in the EA community, even if the manifestation is sometimes more subtle.
Naively, I’d have guessed that the “biases clouded by personal experience” angle would cause upper-middle-class young Westerners to overrate the global importance of problems that they or people close to them personally experience (e.g. mental health issues, racism) and that are shared by other moral patients, rather than overrate the problems that they do not suffer from but others do (e.g. malaria, being trapped in a battery cage).
I agree, but the situation here is a bit more complex. Michael’s telling a story about someone who was emotionally invested in the problem of improving mental health among EAs, but then “suddenly switched to sceptical mode” as soon as the conversation turned to helping people who were more distant. While in sceptical mode, this person (in Michael’s telling) appeared to be relying on the intellectual judgments of more proximate people in the EA community rather than connecting emotionally to the subjective experience of the more distant people, whether we are talking about malaria or depression. The point is about people selectively choosing to apply “sceptical mode” based on the context.
Maybe, but FWIW, lots of people have the view that mental health is only—or, at least, primarily—a problem for wealthy people. The idea seems to be that only the rich have the luxury to obsess about their emotions, whereas the poor are pretty happy and/or too busy dealing with their other problems to stop and ‘fuss’ about them. I don’t know why people think this, but I’m surprised at how often I encounter this view.
I note it’s in some tension with another common attitude, namely that the rich are almost uniformly happy whereas the poor are miserable.
I don’t share this view, and I agree that it is weird. But maybe the feeling behind it is something like: if I, personally, were in extreme poverty I would want people to prioritize getting me material help over mental health help. I imagine I would be kind of baffled and annoyed if some charity was giving me CBT books instead of food or malaria nets.
That’s just a feeling though, and it doesn’t rigorously answer any real cause prioritization question.
I agree that many people believe that mental health problems only affect the rich, and that this belief is incorrect.