Maybe we’re getting too deep into semantics here, but I would have found a headline like “we believe there are better places to work” much more appropriate for the kind of statement they are making.
1. A blanket, unconditional statement like this seems unjustified. As I said before, if you believe in CoEm, Conjecture is probably the right place to work.
2. Where does the “relatively weak for skill building” claim come from? A lot of their research isn’t public, a lot of engineering skill isn’t visible from the outside, and so on. Why didn’t they just ask the many EA-aligned employees at Conjecture what they thought of the skills they learned? That seems like an easy way to correct a potential mischaracterization.
3. Almost all AI alignment organizations are “plausibly” net negative. What if ARC Evals underestimates the risks of its gain-of-function-style research? What if Redwood’s advances in interpretability lead to massive capability gains? What if CAIS’s efforts with the open letter had backfired and rallied everyone against AI safety? Without expected values, this bar is basically meaningless.
Does that clarify where my skepticism comes from? Also, once again, my arguments should not be seen as a recommendation for Conjecture. I do agree with many of the criticisms made in the post.