Hmmm, your reply makes me more worried than before that you’ll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :’)
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones asking for the onerous favor of making sure their work doesn’t kill everyone on earth.
I’m not completely sure what you’re referring to with “legitimate jobs”, but I generally have the impression that EAs working on AI risk have very mixed feelings about AI companies advancing cutting-edge capabilities? Or sharing models openly? And I think reconceptualizing “the behavior of AI companies” (I would suggest trying to be more concrete in public, even here) as aggressive and hostile will itself be perceived as hostile, which you said you wouldn’t do? I think that’s definitely not “the most bland advocacy” anymore?
Also, the way you frame your pushback makes me worry that you’ll lose patience with considerate advocacy way too quickly:
“There’s no reason to rush to hostility”
“If showing hostility works to convey the situation, then hostility could be merited.”
“And I really hope it’s not necessary to advance into hostility.”
It would be convenient for me to say that hostility is counterproductive but I just don’t believe that’s always true. This issue is too important to fall back on platitudes or wishful thinking.
Also, the way you frame your pushback makes me worry that you’ll lose patience with considerate advocacy way too quickly
I don’t know what to say if my statements led you to that conclusion. I felt like I was saying the opposite. Are you just concerned that I think hostility can be an effective tactic at all?