This is a very good point. I think the current sentiment comes from two sides:
Not wanting to alienate AI researchers or make enemies of them, because alienating them from safety work would be even more catastrophic (this is a good reason)
Being intellectually fascinated by AI, and finding it really cool in a nerdy way (this is a bad reason, and I remember someone remarking that Bostrom's book might have been hugely net-negative because it made many people more interested in AGI)
I agree that the current level of disincentives for working on capabilities is too low, and I resolve to tell AI capabilities people that I think their work is very harmful, while staying cordial with them.