I agree with Conjecture’s reply that this reads more like a hitpiece than an even-handed evaluation.
I don’t think your recommendations follow from your observations, and such strong claims certainly don’t follow from the actual evidence you provide. I feel like your criticisms can be summarized as follows:
1. Conjecture was publishing unfinished research directions for a while.
2. Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.
3. Conjecture told the government they were AI safety experts.
4. Some people (who?) say Conjecture’s governance outreach may be net-negative and upsetting to politicians.
5. Conjecture’s CEO Connor used to work on capabilities.
6. One time during college, Connor said that he had replicated GPT-2, then found out he had a bug in his code.
7. Connor has at times said that open-source models were good for alignment, then changed his mind.
8. Conjecture’s infohazard policy can be overturned by Connor or their owners.
9. They’re trying to scale when it is common wisdom for startups to try to stay small.
10. It is unclear how they will balance profit and altruistic motives.
11. Sometimes you talk with people (who?) and they say they’ve had bad interactions with Conjecture staff or leadership when trying to tell them what they’re doing wrong.
12. Conjecture seems like they don’t talk with ML people.
I’m actually curious about why they’re doing 9, and would like further discussion of 10 and 8. But I don’t think any of the other points matter, at least to the depth you’ve covered them here, and I don’t know why you’re spending so much time on things that don’t matter or that you can’t support. This could have been so much better if you had taken the research time spent on everything that wasn’t 8, 9, or 10, used it to do deeper analyses of 8, 9, and 10, and then actually had a conversation with Conjecture about your disagreements with them.
I especially don’t think your arguments support your suggestions:
1. Don’t work at Conjecture.
2. Conjecture should be more cautious when talking to the media, because Connor seems unilateralist.
3. Conjecture should not receive more funding until they reach levels of organizational competence similar to OpenAI’s or Anthropic’s.
4. Rethink whether or not you want to support Conjecture’s work non-monetarily. For example, consider not inviting them to table at EAG career fairs, not inviting Conjecture employees to events or workspaces, and not taking money from them when doing field-building.
(1) seems like a pretty strong claim, which is left unsupported. I know of many people who would be excited to work at Conjecture, and I don’t think your points support the claim that they would be doing net-negative research if they did alignment work at Conjecture.
For (2), I don’t know why you’re saying Connor is unilateralist. Are you saying this because he used to work on capabilities?
(3) is just absurd! OpenAI will perhaps be the most destructive organization to date. I do not think your above arguments make the case that Conjecture is less organizationally responsible than OpenAI. Even having an infohazard document puts them leagues above both OpenAI and Anthropic in my book. Add to that the fact that their primary way of getting funded isn’t building extremely large models… In what way do Anthropic or OpenAI have better corporate governance structures than Conjecture?
(4) is just… what? OK, I’ve thought about it, and I’ve come to the conclusion that it makes no sense given your previous arguments. Maybe there’s a case to be made here: if they are less organizationally competent than OpenAI, then yeah, you probably don’t want to support their work. That seems pretty unlikely to me, though! And you definitely don’t provide anything close to the level of analysis needed to elevate such hypotheses.
Edit: I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they’re doing.
I’m not myself an expert on PR (I’m skeptical that anyone is), so maybe my impressions of the articles are naive and backwards in some way. But if you think this is important, it would be good to explain somewhere why you think their media outreach is net-negative, ideally pointing to particular things you think they did wrong rather than making vague and menacing criticisms of unilateralism.
Regarding your specific concerns about our recommendations:
1) We address this point in our response to Marius (5th paragraph).
2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers.
3) To be clear, we do not expect Conjecture to have the same level of “organizational responsibility” or “organizational competence” (we aren’t sure what you mean by those phrases and don’t use them ourselves) as OpenAI or Anthropic. Our recommendation was for Conjecture to have a robust corporate governance structure. For example, they could change their corporate charter to implement a “springing governance” structure such that voting equity (but not political equity) shifts to an independent board once they cross a certain valuation threshold. As we note in another reply, Conjecture’s infohazard policy has no legal force, and is therefore not as strong as either OpenAI’s or Anthropic’s corporate governance models. As we’ve noted already, we have concerns about both OpenAI and Anthropic despite their having these models in place; Conjecture doesn’t even have those, which makes us more concerned.
[Note: we edited point 3) for clarity on June 13 2023]
My response would be a worse version of Marius’s response. So just read what he said here for my thoughts on hits-based approaches for research.
I disagree, and wish you’d actually explain your position here instead of being vague and menacing. As I said in my previous comment:
I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they’re doing.
This is because they usually present the strongest case for x-risk when talking to reporters, somehow get that case into the article, and then have the reporter speak positively about the cause.
You’ve also said that some people think Conjecture may be decreasing goodwill with policymakers. This announcement seems like a lot of evidence against that. Though there is debate on whether it’s good, the policymakers are certainly paying lip service to AI-alignment-type concerns. I also want to know why I should trust these people to report on policymakers’ opinions. Are they some Discord randos, or parliament aides, or political researchers looking at surveys of parliament leaders, or DeepMind policy people, or what?
In general, I reject the idea that people shouldn’t talk to the government if they’re qualified (in a general sense) and have policy goals which would be good to implement. If policy is to work, it’s because someone did something. So it’s a good thing that Conjecture is doing something.
It is again really weird that you hold up OpenAI as an org with really strong corporate governance. Their charter is a laughing stock, and their policies did not stop them from reorganizing into a for-profit company once Sam, presumably, or whoever their leaders were at the time, saw they could make money.
I don’t know anything about Anthropic’s corporate governance structure. But I also don’t know much about Conjecture. I know at one point I tried to find Anthropic’s board of directors, and found nothing. But that was just a bunch of googling.
Conjecture’s infohazard policy not having legal force is bad, but not as bad as not having an infohazard policy in the first place. By that standard, it seems like OpenAI and Anthropic have corporate governance structures that are just as bad in your book. But you seem to think they have better structures. So I doubt that whether an infohazard policy has legal force is really a crux for you here.
I’m very confused by your statements here, and would like you to explain why you think Conjecture is so uniquely bad that they shouldn’t get any funding and we should consider shunning them from the community, instead of just making the claim. This is where my crux lies.
I’m also curious about OpenAI’s and Anthropic’s corporate governance structures, but I don’t think it’s a crux. If you showed me that OpenAI had a spectacular governance structure, I think I’d be more like “ah, well, in that case corporate governance structures don’t seem all that important, and so it’s a positive that Conjecture isn’t wasting money on this shown-to-be-useless thing”.
(cross-posted to LessWrong)