Thanks for your thoughtful response here (and elsewhere). I definitely think you're acting in good faith (again, sharing your evaluations with the labs beforehand and seeking information/clarification from them is strong evidence of this), and I have appreciated both posts even if I was more on the critical side for this one. I'm sorry that you've found the response to this post difficult, and I apologise if I contributed to that unfairly. I look forward to you continuing the series (I think with Anthropic?).
On the object level I don't think we actually disagree that much. I'm very much in agreement with your sentiment on organisational governance, and how crucial this has been shown to be in both the EA and AI spaces over the past year. I think allowing critical evaluations of AI Safety from inside the field is important to keeping the field healthy. I agree that many people, especially those in the early stages of their career, can achieve good impact working outside of explicitly EA-aligned organisations, and that we shouldn't give those organisations a "pass" because of that affiliation. And I agree that the rate at which Conjecture has scaled will likely lead to organisational problems that may impact the quality of their output.
So on reflection, perhaps the reason my reaction to this post was more mixed than my reaction to the Redwood post is that you made some very strong and critical claims about Conjecture, but the evidence you presented was often vague or a statement of your own beliefs.[1] For example, numerous concerns about Connor's behaviour are stated in the article, but I don't have much to update on apart from "the authors of Omega interpret these events as a sign of poor/untrustworthy character", and if I don't share the same interpretation (or to the same degree),[2] our beliefs can't converge any further unless more evidence/context for those claims is provided.
The same goes for technical assessments of the quality of Conjecture's work, where the evidence is simply: "We believe most of Conjecture's publicly available research to date is low-quality." Perhaps I'm asking for more technical detail about your evaluation of research-lab output than this post was designed to provide, but it's probably the kind of evidence that would most convince me here.
A final example is the claim that Conjecture has damaged the AI Safety cause among UK policymakers. Given that the writers, sources, and policymakers in question are all anonymous, I simply have very little ability to adjudicate how true this claim is. This, I think, is the downside of your decision to remain anonymous: it means that any trust the authors of Omega have built up through their work in the AI Safety community can't be used to vouch for these claims where the evidence is more ambiguous.
I do accept, and take it as a point in your favour, that this may in large part be due to Conjecture's reluctance to co-ordinate with the rest of the ML community and make their work more available for public scrutiny.
For the record, my only contact with Connor personally has been to chat with him over a beer at the EAG afterparty.