I really like this idea and would love to organise and participate in such a fellowship!
To address this concern:
It might be better to avoid calling it “red-teaming”. According to Wikipedia, red teams are used in “cybersecurity, airport security, the military, and intelligence agencies”, so the connotations of the term are probably not great. Similarly, the name might make EA look unwelcoming and uncooperative from the outside.
Maybe we could use more of a scout-mindset framing instead, to make it less adversarial.
I find that framework to be inherently cooperative and non-violent. So rather than asking “why is X wrong?”, we would ask “what if X were wrong? What might the reasons for that be?” (or something along those lines).
I agree that it’s important to ask the meta-question of which pieces of information even have high moral value to begin with. The OP gives the moral welfare of shrimp as an example. But who cares? EA already puts so little money and effort into this, even on the assumption that shrimp probably are morally valuable. So even if you demonstrated that they weren’t, or forced an update in that direction, the overall amount of funding shifted would be fairly small.
You might worry that all the important questions are already so heavily scrutinized that little low-hanging fruit remains, but I don’t think that’s true. EAs are easily nerd-sniped, and there isn’t any kind of “efficient market” for prioritizing high-impact questions. There’s also an intimidation factor: it feels a bit wrong to challenge someone like MacAskill or Bostrom on really critical philosophical questions. But that’s precisely where we should be focusing more attention.