I’m starting to think that the EA community will scare itself into never taking any action at all. I don’t really feel like going over this point by point, because I think this post demonstrates a much larger failure of rationality. The short answer is: you’re doing cost-benefit analysis wrong. You’re zooming in on every possible critique, and determining it’s an organization that shouldn’t have talent directed toward it. Every alignment organization right now has relatively poor results. But the right response isn’t to funnel talent into the ones with slightly better results; it’s to encourage talent to spread across many different approaches.
Here’s a more detailed response to these comments:
… You’re zooming in on every possible critique, and determining it’s an organization that shouldn’t have talent directed toward it.
We chose to write up critiques we felt were directly relevant to our end-line views (our goal was not to write up every possible critique). As we explain in another comment: “We believe that an organization should be graded on multiple metrics. Their outputs are where we would put the most weight. However, their strategy and governance are also key. The last year has brought into sharp relief the importance of strong organizational governance.”
We are supportive of the EA community pursuing a diversified research agenda, and of individual organizations pursuing hits-based agendas (we talk about that more in the first couple of paragraphs of this comment). However, we do think that choosing the right organizations can make a difference, since top candidates often have the option of working at many organizations.
This is because we don’t agree that every alignment organization right now has relatively poor results. Here are some examples of results we find impressive, and organizations we think would be better places to work than Conjecture:
Conceptual advance: ELK (ARC);
State of the art in practically deployable methods: constitutional AI (Anthropic);
Benchmarking of safety-relevant problems: Trojan Detection Competition (CAIS);
Mechanistic interpretability: causal scrubbing (Redwood) and toy models of superposition (Anthropic).
I definitely agree with this, and I’m not very happy with the way Omega focuses solely on criticism, at the very least without offering any balanced assessment.
And given the nature of the problem, some poor initial results should be expected by default.
Responding to both comments in this thread, we have written a reply to TheAthenians’s comment that addresses the points raised regarding our focus on criticism.