Do you think that going to do capabilities work at DeepMind or OpenAI is just as impactful as going to whatever the LessWrong community recommends (as presented by their comments and upvotes)?
Possibly. As we’ve discussed privately, I think some AI safety groups which are usually lauded are actually net negative 🙃
But I was trying to interpret Neel and not give my own opinion.
My meta-opinion is that it would be better to see what others think about working on capabilities at top labs, rather than going there without even considering the downsides. What do you think? (A)
And also that before working at “AI safety groups which are usually lauded [but] are actually net negative”, it would be better to read comments from people like you. What do you think? (B)
I somewhat disagree with both statements.
(A) Sure, it’d be good to have opinions from relevant people, but on the other hand it’s non-trivial to figure out who the “relevant people” are, and “the general opinion on LW” is probably not the right category. I’d look more at what (1) people actually working in the field and (2) the broader ML community think about an org. So maybe the Alignment Forum.
(B) I can only speak to my own views. My opinion on [MIRI] probably wouldn’t really help individuals seeking to work there, since they probably know everything I know and have their own opinions. My opinions are more suitable for discussions of the general AI safety community’s culture.