I read this post kind of quickly, so apologies if I’m misunderstanding. It seems to me that this post’s claim is basically:
Eliezer wrote some arguments about what he believes about AI safety.
People updated toward Eliezer’s beliefs.
Therefore, people defer too much to Eliezer.
I think this is dismissing a different (and much more likely IMO) possibility, which is that Eliezer’s arguments were good, and people updated based on the strength of the arguments.
(Even if his recent posts didn’t contain novel arguments, the arguments still could have been novel to many readers.)
I’m a bit confused by how both this post and the comments treat questions like at what level, and at what point, the deference happens.
Speaking for myself, if an internet rando wrote a random blog post called “AGI Ruin: A List of Lethalities,” I probably would not read it. But I did read Yudkowsky’s post carefully and thought about it nontrivially, mostly due to his track record and writing ability (rather than e.g. because the title was engaging or because the first paragraph was really well-argued).
“which is that Eliezer’s arguments were good,”
There is plenty of evidence against that. His arguments on other subjects aren’t good (see the OP), his arguments on AI aren’t informed by academic expertise or industry experience, his predictions are bad, etc.