[slightly lazy response] You may be interested in some of the sources linked to from the following pages:
accidental harm
information hazard
differential progress
Some of the sources list ways AI safety work specifically could be harmful, whereas others list more general types of/pathways to accidental harm that also happen to be relevant to AI safety work.
(Overall, I think a lot of AI safety work is very valuable, and people shouldn’t let somewhat generic worries about accidental harm strongly push them away from doing it. But it’s also good to be aware of some accidental harm pathways, to get feedback from sensible people before making big moves, etc. Obviously that’s fairly vague! The above links can provide more details.)