I think this is fairly bad advice: LessWrong commenters are wrong about a lot of things. It is an acceptable way to get a vibe for what the LessWrong bubble thinks, though. But idk, for most of these questions the hard part is figuring out which bubble to believe. Most orgs will have some groups who think they're useless, some who think they're great, and probably some who think they're net negative. Finding one bubble that believes one of these three doesn't tell you much!
Thanks for the pushback!
Do you have an alternative suggestion?
I personally interpret Neel's comment as saying this is ~not better (perhaps worse) than going in blindly. So I just wanted to highlight that a better alternative isn't needed to make that argument (even if having one would be a good idea for the sake of future AI researchers).
Do you think that going to do capabilities work at DeepMind or OpenAI is just as impactful as going to whatever the LessWrong community recommends (as presented by their comments and upvotes)?
Possibly. As we’ve discussed privately, I think some AI safety groups which are usually lauded are actually net negative 🙃
But I was trying to interpret Neel and not give my own opinion.
My meta-opinion is that it would be better to see what others think about doing capabilities work at top labs than to go there without even considering the downsides. What do you think? (A)
And also that before working at "AI safety groups which are usually lauded [but] are actually net negative", it would be better to read comments from people like you. What do you think? (B)
I somewhat disagree with both statements.
(A) Sure, it’d be good to have opinions from relevant people, but on the other hand it’s non-trivial to figure out who “relevant people” are, and “the general opinion on LW” is probably not the right category. I’d look more at what (1) people actually working in the field, and (2) the broad ML community, think about an org. So maybe the Alignment Forum.
(B) I can only speak to my specific views. My opinion on [MIRI] probably wouldn't really help individuals seeking to work there, since they probably know everything I know and have their own opinions. My opinions are more suitable for discussions of the general AI safety community's culture.
By the way, I personally resonate with your advice on forming an inside view and am taking that path, but it doesn't fit everyone. Some people don't want all that homework; they want to get into a company and write code, and, to be clear, it is common for them to apply to every org [whose name they see in EA spaces] or something like that (a very wide net, many orgs). This is the target audience I'm trying to help.
I would probably just tell people to work in another field rather than explicitly encourage them to Goodhart their way toward having a positive impact in an area with extreme variance.