Thank you for raising this point – you’re right that we don’t explain why we are writing this series, and we will update the sequence description to be more transparent on that point. The reasons you suggest are basically correct.
With increased attention to TAIS, many people are trying to get into TAIS roles. Without significant context on organizations, new entrants to the field will tend to gravitate towards TAIS organizations based on their prominence, which is driven by factors such as total funding, media coverage, and volume of output. Much of the discussion we have observed around TAIS organizations, especially criticisms of them, happens behind closed doors in conversations that junior people are usually not privy to. We wish to disseminate this information more broadly so that individuals can make better-informed decisions.
We are concerned “that the attractiveness of working at an organization that is connected to the EA or TAIS communities makes it more likely for community members to take jobs at such organizations even if this will result in a lower lifetime impact than alternatives. Conjecture’s sponsorship of TAIS field building efforts may also lead new talent, who are unfamiliar with Conjecture’s history, to have an overly rosy impression of them.”
Regarding anonymization, we are also frustrated that we are not able to share more details. The sources we cite are credible to us (we believe the people who brought them to us to have high integrity). We try to provide relevant context where we can, but don't always have control over this. We don't think the fact that an issue ultimately rests on whom you trust means that we shouldn't bring it to light. We would encourage people who are making active decisions about potential employment or collaboration with Conjecture to speak to people they trust and draw their own conclusions. We plan to edit all our recommendations to say this more explicitly.
I think the blurring between organisational design, strategy, and governance is somewhat separate from the research paradigm question. I can't help wondering if these should be separated out: there seems to be some 'correct' paradigm that the authors of 'Omega' would like AI Safety funding and research to move towards, beyond correcting the organisational practices critiqued in this post and the Redwood one.
We believe that an organization should be graded on multiple metrics. Their outputs are where we would put the most weight. However, their strategy and governance are also key. The last year has brought into sharp relief the importance of strong organizational governance.
We don’t believe that there is a specific “paradigm” we advocate for. We would support the TAIS community pursuing a diversified research agenda.
Thanks for your thoughtful response here (and elsewhere). I definitely think you're acting in good faith (again, sharing your evaluations with the labs beforehand and seeking information/clarification from them is strong evidence of this), and I have appreciated both posts even if I was more on the critical side for this one. I'm sorry that you've found the response to this post difficult, and I apologise if I contributed to that unfairly. I look forward to you continuing the series (I think with Anthropic?).
On the object level, I don't think we actually disagree that much. I'm very much in agreement with your sentiment on organisational governance, and with how crucial this has been shown to be in both the EA and AI spaces over the past year. I think allowing critical evaluations of AI Safety organisations from inside the field is important to keeping the field healthy. I agree that many people, especially those in the early stages of their careers, can achieve good impact working outside of explicitly EA-aligned organisations, and that we should not give those organisations a 'pass' because of that affiliation. And I agree that the rate at which Conjecture has scaled will likely lead to organisational problems that may impact the quality of their output.
So on reflection, perhaps my reaction to this post was more mixed than my reaction to the Redwood post because you made some very strong and critical claims about Conjecture, but the evidence you presented was often vague or a statement of your own beliefs.[1] For example, numerous concerns about Connor's behaviour are stated in the article, but I don't have much to update on apart from "The authors of Omega interpret these events as a sign of poor/untrustworthy character", and if I don't share the same interpretation (or not to the same degree),[2] our beliefs can't converge any further unless more evidence or context for those claims is provided.
The same goes for technical assessments of the quality of Conjecture's work, where the evidence is simply: "We believe most of Conjecture's publicly available research to date is low-quality." Perhaps I'm asking for more technical detail about your evaluation of research-lab output than this post is designed to provide, but that is probably the kind of evidence that would most convince me.
A final example is the claim that Conjecture has damaged the AI Safety cause among UK policymakers. Given that the writers, sources, and policymakers in question are all anonymous, I simply have very little ability to adjudicate how true this claim is. This, I think, is the downside of your decision to remain anonymous: it means that any trust the authors of Omega have built up through their work in the AI Safety community can't be used to vouch for these claims where the evidence is more ambiguous.
I do accept, and take it as a point in your favour, that this may in large part be due to Conjecture's reluctance to co-ordinate with the rest of the ML community and make their work more available for public scrutiny.
For the record, my only contact with Connor personally has been to chat with him over a beer at the EAG afterparty.