6 non-obvious mental health issues specific to AI safety
Intro
I am a psychotherapist, and I help people who work on AI safety. I have noticed patterns of mental health issues that are highly specific to this group. It's not just doomerism; there are many more issues that are less obvious.
If you struggle with a mental health issue related to AI safety, feel free to leave a comment about it and about what helps you cope with it. You can also support others in the comments. Sometimes such support makes a big difference and helps people feel that they are not alone.
All the examples in this post are anonymized and altered so that it is impossible to recognize any specific person behind them.
AI safety is a rather unusual field
The problems described in this post arise because AI safety is not an ordinary field to work in.
Many people within the AI safety community believe that it might be the most important field to work in, but the general public mostly doesn't care that much. The field itself is also extremely competitive, and newcomers often have a hard time getting a job.
No one really knows when we will create AGI, or whether we will be able to keep it aligned. If we fail to align AGI, humanity might go extinct, and even if we succeed, it will radically transform the world.
Patterns
AGI will either cause doom or create a utopia. Everything else seems unimportant and meaningless.
Alex is an ML engineer working at a startup that fights aging. He believes that AGI will either destroy humanity or bring about a utopia that, among other things, will stop aging. So Alex thinks his job is meaningless and quits it. He also sometimes asks himself, "Should I invest? Should I exercise? Should I even floss my teeth? It all seems meaningless."
No one knows what the post-AGI world will look like. All predictions are wild speculations, and it's very hard to tell whether any actions unrelated to AI safety are meaningful. This uncertainty can cause anxiety and depression.
These problems are an exacerbated version of the existential problem of the meaninglessness of life, and the way to mitigate them is to rediscover meaning in a world that ultimately has no inherent meaning.
Check out this post for an in-depth exploration of Alex's meaninglessness and ways to combat it.
I don't know when we will create AGI or whether we will be able to align it, so I feel like I have no control over it.
Bella is an anxious person. She recently got interested in AI safety and realized that nobody knows for sure how to align AGI.
She feels that AGI might pose an extreme danger and that there is nothing she can do about it. She can't even tell how much time we have. A year? Five years? This uncertainty makes her even more anxious. And what if takeoff is so rapid that no one understands what is going on?
Bella is seeing a psychotherapist, but they treat her fear as something irrational. This doesn't help and only makes Bella more anxious. She feels like even her therapist doesn't understand her.
AI safety is a big part of my life, but others don’t care that much about it. I feel alienated.
Chang is an ML scientist working on mechanistic interpretability at an AI lab. AI safety has consumed his life and become part of his identity. He constantly checks AI safety influencers on Twitter, spends a lot of time reading LessWrong and watching AI podcasts, and has even gotten a tattoo of a paperclip.
Chang lives outside of the major AI safety hubs, and he feels a bit lonely because there is no one to talk to about AI safety in person.
Recently he attended his aunt's birthday party and talked about alignment with his family. They were somewhat curious about the topic but didn't care that much. Chang feels like they just don't get it.
Working on AI safety is so important that I neglected other parts of my life and burned out.
Dmitry is an undergrad student. He believes that AI safety is the most important thing in his life, and he either thinks about AI safety or works on it all the time. He has never worked this hard in his life, and it's hard for him to see that neglecting other parts of life and failing to compartmentalize AI safety is a straight path to burnout. When the burnout happens, at first he doesn't understand what has happened, and he becomes depressed because he can't work on AI safety.
People working on AI safety are extremely smart. I don’t think I am good enough to meaningfully contribute.
Ezra recently graduated from a university where he did research on transformers. He wants to work on AI safety, but it seems like everyone at the major AI labs and AI safety orgs is extremely talented and exceptionally well educated. Ezra feels so intimidated by this that it's hard for him to even try doing something.
After a while he finally applies to a number of orgs, but he gets rejected everywhere, and other people share similar experiences. It seems like there are dozens of smart young people applying for each position.
He feels demotivated, and he also needs to pay his bills, so he decides to work at a non-AI-safety company, which makes him sad.
So many smart people think that AI alignment is not that big of a problem. Maybe I am just overreacting?
Francesca is a computer scientist working in academia. She is familiar with machine learning, but it's not the focus of her work. She believes that the arguments for existential risk are solid, and she worries about it.
Francesca is curious about what top ML scientists think about AI safety. Some of them believe that x-risks are serious, while many others don't worry about them that much and think of AI doomers as weirdos.
Francesca feels confused because of this. She still thinks that the arguments for existential risk are solid, but social pressure sometimes makes her think that the whole alignment problem might not be that serious.