Changing your mind in the face of new evidence is certainly commendable. In this case, we were highlighting that Connor has switched from confidently holding one extreme position (founding an organization dedicated to open-source output with all research conducted in public) to the opposite extreme (founding an organization with one of the most restrictive non-disclosure policies) without any substantial new evidence, and with little in the way of a public explanation.
In particular, we wanted to highlight that Conjecture may in the future radically depart from their current info-hazard policy. To the best of our knowledge, the info-hazard policy has no legal force: it is a policy maintained at the discretion of Conjecture leadership. Given Connor has previously radically changed his mind without corresponding extreme changes in the world, we should not be surprised if a major change of strategy occurs again. As such, we would suggest viewing their info-hazard policy as a short-term stance, not a long-term commitment. This isn’t necessarily a bad thing – we’d like Conjecture to share more details on their CogEm approach, for example. However, since Conjecture has repeatedly highlighted the info-hazard policy as a reason to trust them over other labs, we felt it was important to point out the flimsy foundation it rests on.
Separately, we think the degree of Connor’s stated confidence in these and other beliefs (e.g. 99% on AGI by 2100, hat-tip to Ryan Carey) is far from demonstrating good epistemics. A charitable interpretation is that he tends to express himself more strongly than he truly feels. A cynical interpretation would be that his claims are strongly influenced by the incentives he is currently experiencing, a natural human tendency: for example, championing open-source when it was financially and socially rewarded at EleutherAI, and championing secrecy when working with a more cautious group of individuals.
(Thanks to your comment, we reviewed our post and noticed this point, which was made more clearly in earlier drafts, had been de-emphasized. We will edit the post to state it more clearly in the key points, at the top level of the section, and in our views section.)
“may in the future radically depart from their current… policy”
Organisations do this all the time, often without much prior reason to think they will (e.g. OpenAI moving from non-profit to for-profit, and from open-source to closed-source). Saying that updating beliefs (imo in the right direction regarding x-risk) is bad because it makes such a shift more likely is cynical and unfair.