Yes, unfortunately I’ve also been hearing negatives about Conjecture, so much so that I was thinking of writing my own critical post (and I spoke to another non-Omega person who felt similarly). Now that your post is written, I won’t need to, but for the record, my three main concerns were as follows:
1. The dimension of honesty, and the genuineness of their business plan. I won’t repeat it here, because it was one of your main points, but I don’t think it’s an honest way to run a business to sell your investors on a product-oriented vision for the company while telling EAs that the focus is overwhelmingly on safety.
2. Turnover issues, including the interpretability team. I’ve encountered at least half a dozen stories of people working at or considering work at Conjecture, and I’ve yet to hear of any that were positive. This is about as negative a set of testimonials as I’ve heard about any EA organisation. Some prominent figures like Janus and Beren have left. In the last couple of months, turnover has been especially high; my understanding is that Connor told the interpretability team that they were to work instead on cognitive emulations, and most of them left. Much talent has been lost, and this wasn’t a smooth breakup. One aspect of this is that Conjecture abruptly cancelled an interpretability workshop that they were scheduled to host, after some attendees had already flown to the UK for it.
3. Overconfidence. Some will find Connor’s views very sane, but I don’t, and I would be remiss to ignore:
Thinking AGI 99%-likely by 2100
even though 90%+ can be normal
Most staff thinking AGI ruin >60% likely and most expecting AGI in <7 years, and tweeting it.
i.e. including non-researchers; it at least makes one wonder about groupthink.
Ranting about the harm of interpretability research at an EAG afterparty, so prominently that I heard about it several times the next day
the impact of interpretability research is hard to judge, and this comes across as unprofessional.
When I put this together, I get an overall picture that makes it pretty hard to recommend people work with Conjecture, and I would also be thinking about how to disentangle things like MATS from it.
I currently work at Conjecture (this comment is in my personal capacity). Without commenting on any of your other points, I would like to add the data point that I enjoy working here and have grown a lot both personally and professionally while being here. 6/6 colleagues I asked said they did too.
Another data point: I worked for Conjecture until recently, and I broadly agree with Jonathan’s assessment. It is a pretty impressive group of people and I enjoyed working with them. Work was occasionally quite intense, but that is par for the course for such a young organisation that’s moving incredibly fast in such a challenging field.
I would recommend working for Conjecture, especially for anyone located in Europe who wants to work in alignment.
Most staff thinking AGI ruin >60% likely and most expecting AGI in <7 years, and tweeting it.
i.e. including non-researchers; it at least makes one wonder about groupthink.
I would expect a lot of selection effects on who goes to work for Conjecture; similarly, I wouldn’t find it concerning if non-researchers at Greenpeace had extreme views about the environment.
I actually would find this at least somewhat concerning, because selection effects are my biggest worry with smart people working in an area. If who studies an area is determined by non-truthseeking motivations, or if people are pressured to go along with a view for non-truthseeking reasons, then it’s very easy to land in nonsense, where the consensus is driven entirely by selection effects, making it useless to us.
Here’s a link to the comment by lukeprog on the worst-case scenario of smart people being dominated by selection effects:
https://www.lesswrong.com/posts/fyZBtNB3Ki3fM4a6Y/some-heuristics-for-evaluating-the-soundness-of-the-academic#scZakQrAYm2Mck9QQ
One marker to watch out for is a kind of selection effect.
In some fields, only ‘true believers’ have any motivation to spend their entire careers studying the subject in the first place, and so the ‘mainstream’ in that field is absolutely nutty.
Case examples include philosophy of religion, New Testament studies, Historical Jesus studies, and Quranic studies. These fields differ from, say, cryptozoology in that the biggest names in the field, and the biggest papers, are published by very smart people in leading journals and look all very normal and impressive but those entire fields are so incredibly screwed by the selection effect that it’s only “radicals” who say things like, “Um, you realize that the ‘gospel of Mark’ is written in the genre of fiction, right?”
Why can’t non-research staff have an opinion about timelines? And why can’t staff tweet their timelines? Seems an overwhelmingly common EA thing to do.
I don’t think the issue is that they have an opinion, but rather that they all have the same opinion; ‘all the researchers have the same p(doom), and even the non-researchers too’ is exactly the sort of thing I’d imagine hearing about a cultish org.
I feel like you’re putting words into my mouth a little bit there. I didn’t say that their beliefs/behaviour were dispositively wrong, but that IF you have longer timelines, then you might start to wonder about groupthink.
That’s because in surveys and discussions of these issues even at MIRI, FHI, etc there have always been some researchers who have taken more mainstream views—and non-research staff usually have more mainstream views than researchers (which is not unreasonable if they’ve thought less about the issue).