I think these are fair points: I agree the info hazard stuff has smothered a lot of talent development and field building, and I agree the case for x-risk from misaligned advanced AI is more compelling. At the same time, I don’t talk to many EAs or people in the broader ecosystem these days who are laser-focused on extinction over GCR; that seems like a small subset of the community. So I expect various social effects, making a bunch more money, and AI being really cool, interesting, and fast-moving are probably a bigger deal than x-risk compellingness simpliciter. Or at least they have had a bigger effect on my choices!
But insufficiently successful talent development / salience / comms is probably the biggest thing, I agree.