Here’s a more detailed response to these comments:
“… You’re zooming in on every possible critique, and determining it’s an organization that shouldn’t have talent directed toward it.”
We chose to write up critiques we felt were directly relevant to our end-line views (our goal was not to write up every possible critique). As we explain in another comment: “We believe that an organization should be graded on multiple metrics. Their outputs are where we would put the most weight. However, their strategy and governance are also key. The last year has brought into sharp relief the importance of strong organizational governance.”
We are supportive of the EA community pursuing a diversified research agenda, and of individual organizations pursuing hits-based agendas (we talk about that more in the first couple paragraphs of this comment). However, we do think that choosing the right organization can make a difference, since top candidates often have the option of working at many organizations.
This is because we don’t agree that every alignment organization right now has relatively poor results. Here are some examples of results we find impressive, and organizations we think would be better places to work than Conjecture:
Conceptual advance: ELK (ARC);
State of the art in practically deployable methods: Constitutional AI (Anthropic);
Benchmarking of safety-relevant problems: Trojan Detection Competition (CAIS);
Mechanistic interpretability: causal scrubbing (Redwood); toy models of superposition (Anthropic).