This is an anonymous account.
Omega
Hi Jakub, these are standard rates for EECS PhD students (PhD students in other disciplines are paid less). Here are a couple of examples:
Berkeley EECS PhD students are paid $45K per year at the PhD level. (from personal acquaintances in the Berkeley EECS program)
MIT EECS PhD students are paid ~$49.2K per year at the PhD level. (source)
Field Experience
Many research scientist roles at AI research labs (e.g. DeepMind and Google Brain[1]) expect researchers to have PhDs in ML, which implies a minimum of 5 years of relevant research experience.
Not all labs have a strict requirement for ML PhDs. Many people at OpenAI and Anthropic don't have PhDs in ML either, but often have PhDs in related fields like maths or physics. There are a decent number of people at OpenAI without PhDs (Anthropic is relatively stricter on this than OpenAI). Labs like MIRI don't require this, but they are doing more conceptual research and relatively little, if any, ML research (to the best of our knowledge; they are private by default).
[1] Note that while we think for-profit AI labs are not the right reference class for comparing funding, we do think that all AI labs (academic, non-profit, or for-profit) are the correct reference class when considering credentials for research scientists.
Hi Fay, Thank you for engaging with the post. We appreciate you taking the time to check the claims we make.
1) Redwood Funding
Regarding OP's investment in OpenAI: you are correct that OpenAI received a larger amount of money. We didn't include this because, since the grant in 2017, OpenAI has transitioned to a capped for-profit. I (the author of this particular comment) was actually not aware that OpenAI had at one point been a research non-profit. I will be updating the original post to add this information; we appreciate you flagging it.
In general, we disagree that the correct reference class for evaluating Redwood's funding is for-profit alignment labs like OpenAI, Anthropic, or DeepMind, because those labs have significantly more funding from (primarily non-EA) investors and have different core objectives and goals. We think the correct reference class for Redwood is other TAIS labs (academic and non-profit research) such as CHAI, CAIS, FAR AI, and so on. I will add some clarification to the original post with more context.
(We will discuss the point on OP having board seats at Redwood in a separate comment)
Thanks for this detailed comment, Jacob. We're in agreement with your first point, but on re-reading the post we can see why it reads as though we think the problem selection was also wrong; we don't believe this. We will clarify the distinction between problem selection and execution in the main post soon.
Our main concern was that, when working on a problem where a lot of prior research has been done, it is important to come into it with a novel approach or insight. We think it's possible the team could have done this via a more thorough literature review or by engaging with domain experts. Where we may disagree is on whether our suggestion of doing more desk research beforehand might result in researchers dismissing ideas too easily, and thus experimenting and learning less.
We think this is definitely possible, but feel it can be less costly in some cases, and in particular could have been useful for the adversarial training project. As we write later on in the passage you quoted above, we think the problem with the adversarial training project was that Redwood focused on an unusually challenging threat model (unrestricted adversarial examples), and although some aspects of the textual domain make the problem easier, the large number of textual adversarial attacks indicated this was unlikely to be sufficient.
We will edit this section to make it clearer, but the MIRI critique is the MIRI hyperlink: Paul Christiano's critique of Eliezer.
Hi Dawn!
What do you count as software engineering experience? The linked LinkedIn profile suggests he has >10 years of experience in the field.
Our critique of the lack of senior ML staff focuses specifically on machine learning expertise (as opposed to general TAIS work). We are counting substantive software engineering experience such as his work at PayPal and Triplebyte.
On the topic of general TAIS experience, I think Buck has at most 7 years of experience, as he joined MIRI in 2017. (It is our understanding that a decent portion of his time at MIRI was spent recruiting.) That being said, years of experience is not the only measure of expertise; Jacob Steinhardt comments above that he believes Buck is "a stronger researcher than most people with ML PhDs. He is weaker at empirical ML than this baseline, but very strong conceptually in ways that translate well to machine learning."
Can you confirm that Redwood really fired them as opposed to them quitting? (The first is unusual in my experience; the second very common.) You mention employees quitting in various places but because they’re anonymous, I can’t tell whether that refers to the same people. Thanks!
To our knowledge, their more experienced ML research staff were let go. We refer to different employees quitting at later stages. In an earlier draft we had named a few of them, but decided to remove the names due to anonymity concerns.
Hi Bill, yes, your understanding is correct: we will be writing a post about Constellation in the future, and we will share a draft ahead of time with you / Redwood.
Thanks for mentioning the $20M point, Nate. I've edited the post to make this a little clearer and would suggest people use $14M as the number instead.
Hi Akash,
Thank you for sharing your thoughts & those concrete action items. I agree it would be nice to have a set of recommendations in an ideal world.
This post took at least 50 hours (collectively) to write, and its publication was delayed by a few days due to busy schedules. I think if we had had more time, I would have shared the final version with a small set of non-Redwood beta reviewers for comments, which would have caught things like this (and e.g. Nuno's comment).
We plan to do this for future posts (if you’re reading this and would like to give comments on future posts, please DM us!).
We'll consider adding an intervention section to future reports, time permitting. (We still think there is value in sharing our observations, as a lot of this information is not available to people without the relevant networks.)
(Time permitting, I may come back at a later stage and respond to your point on Redwood having many problems to deal with.)
Thanks Nuno, I'm sharing this comment with the other contributors and will respond in depth soon. I think you're right that we could be more explicit on point 3.
Update: this has now been edited in the original post.