I think that EA as it exists today doesn’t provide much value. It focuses mostly on things that are obvious today (‘malaria is bad’), providing people a slightly better way to do what they already think is a good idea, rather than making bets on high-impact large-scale interventions. It also places too much emphasis on alleviating suffering, to the exclusion of Kantian, contractarian, etc. conceptions of ethical obligation.
(By this I primarily have in mind that too many EAs are working on changing the subjective experience of chickens and crickets in a particular direction, on the assumption that qualia/subjectivity is a relatively natural kind, that it exhibits commensurate valences across different species, and that these valences track moral importance very closely. It strikes me as more plausible that morality as we know it is, loosely speaking, a human thing—a phenomenon that’s grounded in our brain’s motivational systems and directed at achieving cooperate-cooperate equilibria between intelligent agents simulating one another. Since crickets aren’t sophisticated enough to form good mental models of humans (or even of other crickets), they just aren’t the kinds of physical systems that are likely to be objects of much moral concern, if any. I obviously don’t expect all EAs to agree with me on any of these points, but I think far too many EAs rigidly adhere to the same unquestioned views on moral theory, which would be bad enough even if those views were likely to be true.)
The only EA movement-building organization that strikes me as useful for long-run considerations is 80,000 Hours. GiveWell deliberately avoids the kinds of interventions and organizations that are likely to be useful, and Good Ventures doesn’t strike me as willing to explore hard enough to do anything interesting. More generally, I feel like a lot of skilled people are now wasting their time on EA (e.g., Oliver Habryka), many of whom would otherwise be working on issues more directly related to AGI.
What I’d like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities—doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months’ investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.
Anonymous #4:

I have spoken with two people in the community who felt they didn’t have anyone to turn to who would not throw rationalist-type techniques at them when they were experiencing mental health problems. The “fix it” attitude is fairly toxic in many common situations.
If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and that when a person is bleeding spoons in front of you, you need a different strategy.
> If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and that when a person is bleeding spoons in front of you, you need a different strategy.
I strongly agree with this, and I hadn’t heard anyone articulate it quite this explicitly—thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn’t always “use this CFAR technique.”
(I think CFAR are great and a lot of their techniques are really useful. But I’ve also spent a bunch of time feeling bad about the fact that I don’t seem able to learn and implement these techniques in the way many other people seem to, and it’s taken me a long time to realise that trying to ‘figure out’ how to fix my problems in a very analytical way is very often not what I need.)
> What I’d like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities—doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months’ investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.
I’d be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I’ve thought a lot about how we can support each other. I’m not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.
Speculation ahoy:
1) I wonder if, say, Bay Area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off-things that typically work best for us differ in important ways from the things that work for most people.
2) Also, something about the way status works in the social climate of the EA/LW Bay Area community is both unusual and more toxic than the way status works in more average social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked.)
3) I wonder how much good work could be done on anyone’s mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I’ve never tried something like this before, but I’d eventually like to.
Well, writing that comment was a journey. I doubt I’ll stand by all of what I’ve written here tomorrow morning, but I do think that I’m correct on some points, and that I’m pointing in a few valuable directions.
I’m so intrigued by proposal 3)!
I think when a friend is struggling like that I often have a vague feeling of wanting to engage/help in a bigger way than having a few chats about it, and I’m intrigued by this idea of how to do that.
And also thinking about myself I think I’d love it if someone did that for me.
I’m gonna keep that in mind and maybe try it one day!
I think I would find this super helpful. Low-level mental health stuff has contributed to me basically muddling around for years, nowhere near making good on what I could (in my best attempt at a probably faulty self-assessment) potentially learn and contribute.