I guess the argument is that, if it takes (say) the same amount of effort/resources to speed up AI safety research tenfold as it does to slow down general AI research by 1% via spreading norms of safety/caution, then plausibly the latter is more valuable due to the sheer volume of general AI research being done (with the assumption that slowing down general AI research is a good thing, which, as you pointed out in your original point (1), may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research, vs. going from $10 billion to $9.9 billion in general research.
This does seem to assume that the absolute size of the difference matters more than the proportional change. I’m not sure how to think about whether that’s the case.
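For concreteness, here is a minimal sketch of that arithmetic, using the illustrative dollar figures from above (they stand in for research effort, not real estimates):

```python
# Toy comparison of the two interventions above. The dollar figures are
# the illustrative ones from the example, not real estimates.
safety_before, safety_after = 1e6, 10e6      # $1M -> $10M in safety research
general_before, general_after = 10e9, 9.9e9  # $10B -> $9.9B in general research

safety_gain = safety_after - safety_before    # absolute increase in safety research
general_cut = general_before - general_after  # absolute decrease in general research

print(f"Safety:  +${safety_gain:,.0f} ({safety_gain / safety_before:.0%} increase)")
print(f"General: -${general_cut:,.0f} ({general_cut / general_before:.0%} decrease)")
# Safety:  +$9,000,000 (900% increase)
# General: -$100,000,000 (1% decrease)
```

Under the absolute-size view, the $100 million reduction dwarfs the $9 million gain; under the proportional view, the 900% increase dwarfs the 1% decrease. That is exactly the assumption flagged above.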
This is a good point; however, I would also like to point out that a majority of “dedicated donors” could end up not taking the pledge without this becoming a norm. The norm could instead be “each individual should think through, given their own unique situation, whether or not taking the pledge is likely to be valuable,” which could lead to a situation where “dedicated donors” tend not to take the pledge, but not necessarily to one where, if you are a “dedicated donor,” you are expected not to take the pledge.
(I am highly uncertain whether this is how norms work; that is, whether a norm linking a group of people to a certain behavior can fail to develop even when a majority of that group engages in the behavior.)