In any case, I think it’s clear that AI Safety is no longer ‘neglected’ within EA, and possibly not outside of it either.
I think this can’t be clear based only on observing that lots of people at EAG are into it. You have to include some kind of independent evaluation of how much attention the area “should” have. For example, if you believed that AI alignment should receive as much attention as climate change, then EAG being 100% about AI would still not be enough to make it no longer neglected.
(Maybe you implicitly do have a model of this, but then I’d like to hear more about it.)
FWIW I’m not sure what my model is, but it involves the fact that despite many people being interested in the field, the number actually working on it full time still seems kind of small, and in particular still dramatically smaller than the number of people working on advancing AI tech.