We think there are many more impactful places to work, including non-profits such as Redwood, CAIS and FAR; alignment teams at Anthropic, OpenAI and DeepMind; and roles working with academics such as Stuart Russell, Sam Bowman, Jacob Steinhardt or David Krueger. Note that we would not, in general, recommend working on capabilities-oriented teams at Anthropic, OpenAI, DeepMind or other AGI-focused companies.
Additionally, Conjecture seems relatively weak for skill building [...] We expect most ML engineering or research roles at prominent AI labs to offer better mentorship than Conjecture. Although we would hesitate to recommend taking a position at a capabilities-focused lab purely for skill building, we find it plausible that Conjecture could end up being net-negative, and so do not view it as a safer option than most competing firms in this regard.
Thanks for writing this! RE: "We would advise against working at Conjecture"
I don’t work in AI safety and am not well-informed on the orgs here, but did want to comment on this as this recommendation might benefit from some clarity about who the target audience is.
As written, the claims sound something like:
CAIS et al., alignment teams at Anthropic et al., and working with Stuart Russell et al. are better places to work than Conjecture
Though not necessarily recommended, capabilities research at prominent AI labs is likely to be better than working at Conjecture for skill building, since Conjecture is not necessarily safer.
However:
The suggested alternatives don’t seem like they would be able to absorb a significant amount of additional talent, especially given the increase in interest in AI.
I have spoken to a few people working in AI / AI field building who perceive mentoring to be a bottleneck in AI safety at the moment.
If both of the above are true, what would your recommendation be to someone who had an offer from Conjecture, but not from your recommended alternatives? E.g., choosing between independent research funded by the LTFF vs. working for Conjecture?
Just seeking a bit more clarity on whether this recommendation is mainly aimed at people who have a choice between Conjecture and your suggested alternatives, whether it is a blanket recommendation to reject offers from Conjecture regardless of seniority and available alternatives, or something in between.
Thanks again!
Hi Bruce, thanks for this thoughtful comment. We think Conjecture needs to address key concerns before we would recommend working there, although we could imagine Conjecture being the best option for a small fraction of people who (a) are excited by their current CoEm approach, (b) can operate independently in an environment with limited mentorship, and (c) are confident they can withstand internal pressure (if there is a push to work on capabilities). As a result of these (and other) comments in this thread, we will be updating our recommendation regarding working at Conjecture.
That being said, we expect it to be rare for an individual to have an offer from Conjecture but no access to other opportunities that are better than independent research. In practice, many organizations end up competing for the same relatively small pool of the very top candidates. Our guess is that most individuals who could receive an offer from Conjecture could pursue one of the paths outlined above in our replies to Marius, such as being a research assistant or PhD student in academia, or working in an ML engineering position on an applied team at a major tech company (if not at more promising places like the ones we discuss in the original post). We think these positions can absorb a fairly large amount of talent, although we note that most AI/ML fields are fairly competitive.