I’m really grateful you’ve written this with such a detailed and specific focus.
I too have worried about the perception of EA among AI ethics researchers, many of whom are well-established and reputable scientists who sincerely care about what many EAs care about: safe AI. It’s a shame that more respectful common language hasn’t been found there, and I think part of what is missing is reflection on communication. I’ve seen some nasty-spirited tweets from EAs in response to TG and folks in her research network. Of course, caution should be applied when reasoning from small numbers, but if anything has been done at the group or broader community level, like adversarial collaborations or discussion panels, I have missed it, though I’d be interested in learning what’s been tried. At face value, it looks like a misuse of resources not to be collaborating with them, or at least trying to find more common ground, if the ultimate values are similar.