I believe you left out another important reason why it’s okay not to go into AI: it’s okay to think that the risk of AI is wildly overblown. I’m worried that EA might be unwittingly drifting into a community where AI skeptics feel unwelcome and just leave (or never join in the first place), which is obviously bad for intellectual discourse, even if you think they are wrong.
As someone still developing a technical grasp of AI, I honestly am a bit overwhelmed sometimes by the degree of focus on AI (at least in my college-age EA social circles) over bio and nuclear security. I would love to deep-dive more into those areas, but it seems like AI safety is where most of the visible opportunities are (at least for now).
I’m also wary of cargo-culting some of the AI risk arguments to newcomers as a community-builder when I don’t necessarily understand everything myself from the ground up.
Yes! I also suspect that people who think AI is by far the most important problem are more concentrated in the San Francisco Bay Area than in other cities with a lot of effective altruists, like London. Personally, I think we probably already have enough people working on AI, but I was worried about getting downvoted if I put that in my original post, so I scoped it down to something I thought everybody could get on board with (that people shouldn’t feel bad about not working on AI).
I wonder how many other people are avoiding discussing their true beliefs about AI for similar reasons? I definitely don’t judge anyone for doing so. There are a lot of subtle discouragements against disagreeing with an in-group consensus, even if none of it is deliberate or conscious: you might worry that people will judge you as dumb for not understanding their arguments, or that they won’t be receptive to your other points; you might feel the natural urge to avoid a debate when you are outnumbered, or simply want to fit in and be popular.