I’m not really an EA, but EA-adjacent. I am quite concerned about AI safety, and think it’s probably the most important problem we’re dealing with right now.
It sounds like your post is trying to highlight some general problems in EA university groups, and you do describe specific dynamics that one can reasonably be concerned about. It also seems, however, that you have an issue with the predominance of AI concerns that is separate from those dynamics and that strongly shines through in the post. I find this dilutes your message, and it might be better separated from the rest of your post.
As a counterpoint: I’m also worried about AI safety despite having mostly withdrawn from EA, but I think the EA focus and discussion on AI safety is weird and bad, and people in EA get sold on specific ideas far too easily. Some examples of ideas that are common but that I believe to be very shoddy: “most important century”, “automatic doom from AGI”, “AGI is likely to be developed in the next decade”, “AGI would create superintelligence”.
What are your reasons for being worried?
More simplistic ones. Machines are getting smarter and more complex, and have the potential to surpass humans in intelligence, in the sense of being able to do the things we can do, or harder things we haven’t cracked yet, all while holding a vast advantage in computing power and speed. The stories we invent about how machines could get out of control are often weird and require them to ‘think outside the box’ and reason about themselves, but since we ourselves can do that, there’s no reason a machine couldn’t. Add to all of this the perils of maximization.
The thing is, every part of this might or might not happen. Machine intelligence may remain too narrow to do any of this. Or it may not decide to break out of its cage. Or we may find ways to contain it by the time any of this becomes possible. Given the current state of AI, I strongly think none of this will happen soon.
Mostly agree, though maybe not with the last sentence on certain readings (i.e. I’m “only” 95% confident we won’t have human-like agents by 2032, not 99.9% confident). But strong agreement that the basic “hey, intelligent agents could be dangerous; humans are” is much more convincing than detailed AI doomer stuff.
You’re not really countering me! It’s very easy to imagine that group dynamics like this get out of hand, and people tend to repeat certain talking points without due consideration. But if your problem is bad discourse around an issue, it would be better to present that separately from your personal opinions on the issue itself.
I don’t think the two issues are separate. The bad dynamics and discourse in EA are heavily intertwined with the ubiquity of weakly supported but widely held ideas, many of which fuel the AI safety focus of the community. The subgroups of the community where these dynamics are worst are exactly those where AI safety as a cause area is the most popular.