We might mostly be arguing about semantics. In a similar discussion a few days ago I was making the literal analogy of “if you were worried about EA having bad effects on the world via the same kind of mechanism as the rise of communism, a large fraction of the things under the AI section should go into the ‘concern’ column, not the ‘success’ column”. Your analogy with Marx illustrates that point.
I do disagree with your last sentence. The thing people are endorsing is very much both a social movement and a set of object-level claims. I think it differs between people, but there is a lot of endorsing AI Safety as a social movement. Social proof is usually the primary thing invoked these days to convince people.