Perhaps this is a bit tangential, but I wanted to ask since the 80k team seem to be reading this post. How have 80k historically approached the mental health effects of exposing younger (i.e. likely somewhat more neurotic) people to existential risks? I'm thinking in the vein of "Here's the exit." Do you, or could you, recommend alternative paths or career advice sites for people who might not be able to contribute to existential risk reduction due to, for lack of a better word, their temperament? (Perhaps something similar for factory farming, too?)
For example, I think I might make a decent enough AI Safety person and generally agree it could be a good idea, but I’ve explicitly chosen not to pursue it because (among other reasons) I’m pretty sure it would totally fry my nerves. The popularity of that LessWrong post suggests that I’m not alone, and also raises the interesting possibility that such people might end up actively detracting from the efforts of others, rather than just neutrally crashing out.
I don't think we have anything written/official on this particular issue (though we have covered other mental health topics here). That said, this is one reason we don't think everyone should work on AI safety/trying to help things go well with AGI: even though we want to encourage more people to consider it, we don't blanket recommend it to everyone. We wrote a little bit here about an issue that seems related, namely what to do if you find the case for an issue intellectually compelling but don't feel motivated by it.