Thanks for writing this :)
I haven’t had a chance to read it all yet, so this point may already be covered in the post, but I think more EA influence relative to non-EA influence (let’s call non-EAs “normies”) within AI Safety could actually be bad.
For example, a year ago the majority of normies would have seen lab jobs being promoted in EA spaces and gone “nah, that’s off”, whereas much of EA needed more nuance and time to think it through.
That combination of thinking style and level of conscientiousness is something I think we need less of, which reinforces my initial point. (There are other dimensions around which kinds of normies we should target within AI Safety, but that’s its own thing altogether.)