It’s a lot more direct with AI, though. AI safety org people and EA org people are often the same people, or are personal friends, or at least know each other in some capacity. This undeniably grants them advantages over some far-off animal rights org: the social ties give their ideas more access, more consideration, and less risk of being written off as crazy. If someone found decisive proof that AI safety was nonsense, I’m sure they would publish it, but they might feel bad about putting personal friends out of jobs, making them look foolish, and so on. I think this bias seeps, at least a little, into how AI safety gets evaluated.