I think it’s great that you’re asking for support rather than facing existential anxiety alone, and I’m sorry that you don’t seem to have people in your life who will take your worries seriously and talk through them with you. And I’m sure everyone responding here means well and wants the best for you, but joining the Forum has filtered us—whether for our worldviews, our interests, or our susceptibility to certain arguments. If we’re here for reasons other than AI, then we probably don’t mind talk of doom or are at least too conflict-averse to continually barge into others’ AI discussions.
So I would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you’re in doubt and perseverating on thoughts of Hell. You don’t have to listen to any of us, and you wouldn’t have to even if we really were all smarter than you.
You point out the XPT forecasts. I think that’s a great place to start. It’s hard to argue that a non-expert ought to defer more to AI-safety researchers than to either the superforecasters or the expert group. Having heard from XPT participants, I don’t think the difference between them and people more pessimistic about AI risk comes down to facility with technical or philosophical details. This matches my experience reading deep into existential risk debates over the years: the pessimists don’t know anything I don’t. They mostly find different lines of argument more or less persuasive.
I don’t have first-hand advice to give on living with existential anxiety. I think the most important thing is to take care of yourself, even if you do end up settling on AI safety as your top priority. A good therapist might have helpful ideas regarding rumination and feelings of helplessness, which aren’t required responses to any beliefs about existential risk.
I’m glad to respond to comments here, but please feel free to reach out privately as well. (That goes for anyone with similar thoughts who wants to talk to someone familiar with AI discussions but unpersuaded about risk.)
“would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you’re in doubt and perseverating on thoughts of Hell. You don’t have to listen to any of us, and you wouldn’t have to even if we really were all smarter than you.”
This is #wisdom. Love it.