“Around longtermism, there doesn’t seem to be much public organization evaluation or criticism. I think one issue is that many of the potential evaluators are social peers of the people they would be evaluating”
The social tightness and revolving-door nature of AI safety certainly create bad groupthink dynamics.
However, on the longtermist/AI Safety side, I think the post misses the fact that these communities barely engage with criticism, even very well-written pieces by credentialed folks.
Case in point: a recent sequence of critical posts on deceptive alignment by someone I know to be a very well-qualified data scientist/ML researcher:
https://www.lesswrong.com/posts/RTkatYxJWvXR4Qbyd/deceptive-alignment-is-less-than-1-likely-by-default#comments
This post has, at best, one or two thought-out comments that actually engage with the subject matter. Compare that to the rest of the LW forum, where thousands of people have been doomsaying about recent advances in LLM capabilities just in the last week.
To me, the main difference between global health initiatives and the AI Safety / Longtermist crowd isn't that people are uncomfortable voicing criticism. Instead, it seems that global health folks are far more willing to engage with and seek out external auditing, whereas AI Safety researchers either ignore or actively discourage dissent due to short timelines.
Good point. I was trying to keep this post focused on one specific bottleneck of criticism; I definitely agree there are others too.
I added the following text to clarify:
To be clear, there are many bottlenecks between “someone is in a place to come up with a valuable critique” and “different decisions actually get made.” This process is costly and precarious at each step. For instance, decision makers think in very different ways than critics realize, so it’s easy for critics to waste a lot of time writing to them.
This post just focuses on the challenges that come from things being uncomfortable to say. Going through the entire pipeline would require far more words.
Thanks for clarifying. :) Sorry to derail.
Thanks for the point. I also had someone else make a similar comment in the draft; I should have expected others to raise it as well.