Regarding your specific concerns about our recommendations:

1) We address this point in our response to Marius (5th paragraph).

2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers.
3) To be clear, we do not expect Conjecture to have the same level of “organizational responsibility” or “organizational competence” (we aren’t sure what you mean by those phrases and don’t use them ourselves) as OpenAI or Anthropic. Our recommendation was for Conjecture to have a robust corporate governance structure. For example, they could change their corporate charter to implement a “springing governance” structure such that voting equity (but not economic equity) shifts to an independent board once they cross a certain valuation threshold (see the sketch after the note below). As we note in another reply, Conjecture’s infohazard policy has no legal force, and is therefore not as strong as either OpenAI’s or Anthropic’s corporate governance model. As we’ve noted already, we have concerns about both OpenAI and Anthropic despite their having these models in place; Conjecture doesn’t even have those, which makes us more concerned.
[Note: we edited point 3) for clarity on June 13 2023]
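To make the mechanism proposed in 3) concrete, here is a minimal, purely hypothetical Python sketch of how a springing-governance trigger works: voting control flips to an independent board once a valuation threshold is crossed, while economic rights stay where they were. Every name and number below is an illustrative assumption, not a term from Conjecture’s (or anyone’s) actual charter.

```python
# Hypothetical sketch only: names, fields, and the threshold are illustrative
# assumptions, not terms from any actual corporate charter.
from dataclasses import dataclass

VALUATION_THRESHOLD = 1_000_000_000  # assumed trigger, e.g. a $1B valuation


@dataclass
class Company:
    valuation: float
    voting_controller: str = "founders"          # who holds voting equity
    economic_beneficiary: str = "shareholders"   # who holds economic equity

    def update_valuation(self, new_valuation: float) -> None:
        """Re-check the springing condition whenever the valuation changes."""
        self.valuation = new_valuation
        if self.valuation >= VALUATION_THRESHOLD:
            # Voting equity "springs" to an independent board;
            # economic equity stays where it was.
            self.voting_controller = "independent board"


company = Company(valuation=50_000_000)
company.update_valuation(2_000_000_000)  # crossing the threshold flips control
assert company.voting_controller == "independent board"
assert company.economic_beneficiary == "shareholders"
```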
My response would be a worse version of Marius’s response, so just read what he said here for my thoughts on hits-based approaches to research.

I disagree, and wish you’d actually explain your position here instead of being vague and menacing, as I’ve said in my previous comment.
I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they’re doing.
This is because they usually present the strongest case for x-risk when talking to reporters, somehow get that case into the article, and then have the reporter speak positively about the cause.
You’ve also said that some people think Conjecture may be decreasing goodwill with policymakers. This announcement seems like a lot of evidence against that. Though there is debate about whether it’s good, policymakers are certainly paying lip service to AI-alignment-type concerns. I also want to know why I would trust such people to report on policymakers’ opinions. Are these Discord randos, parliamentary aides, political researchers looking at surveys of parliamentary leaders, DeepMind policy people, or what?
In general, I reject the claim that people shouldn’t talk to the government if they’re qualified (in a general sense) and have policy goals which would be good to implement. If policy is to work, it’s because someone did something. So it’s a good thing that Conjecture is doing something.
It is again really weird that you hold up OpenAI as an org with really strong corporate governance. Their charter is a laughingstock, and their policies did not stop them from reforming into a for-profit company once Sam (presumably, or whoever their leaders were at the time) saw they could make money.
I don’t know anything about Anthropic’s corporate governance structure. But I also don’t know much about Conjecture. I know at one point I tried to find Anthropic’s board of directors, and found nothing. But that was just a bunch of googling.
Conjecture’s infohazard policy not having legal force is bad, but not as bad as not having an infohazard policy in the first place. It seems like, in your book, OpenAI and Anthropic have corporate governance structures that are just as bad, and yet you seem to think they have better structures. So I doubt that having an infohazard policy with legal force is actually a crux for you here.
I’m very confused by your statements here, and would like you to explain why you think Conjecture’s governance is uniquely bad: so bad that they shouldn’t get any funding and that we should consider shunning them from the community. Explain it, instead of just making the claim. This is where my crux lies.
I’m also curious about OpenAI’s and Anthropic’s corporate governance structures, but I don’t think it’s a crux. If you showed me that OpenAI had a spectacular governance structure, I think I’d be more like: “ah, well, in that case corporate governance structures don’t seem all that important, and so it’s a positive that Conjecture isn’t wasting money on this shown-to-be-useless thing”.