I really like this idea and would love to organise and participate in such a fellowship!
To address this concern:
"Similarly, this might make EA look unwelcoming and uncooperative from the outside."
It might be better to avoid calling it "red-teaming". According to Wikipedia, red teams are used in "cybersecurity, airport security, the military, and intelligence agencies", so the connotations of the word are probably not great.
Maybe we could use more of a scout mindset framing to make it less adversarial.
I find that framework to be inherently cooperative / non-violent. So rather than asking "why is X wrong?" we ask "what if X were wrong? What might the reasons for this be?" (or something like this).
I agree that it's important to ask the meta questions about which pieces of information even have high moral value to begin with. The OP gives the moral welfare of shrimp as an example. But who cares? EA already puts so little money and effort into this, even on the assumption that shrimp probably are morally valuable. Even if you demonstrated that they weren't, or forced an update in that direction, the overall amount of funding shifted would be fairly small.
You might worry that all the important questions are already so heavily scrutinized as to bear little low-hanging fruit, but I don't think that's true. EAs are easily nerd-sniped, and there isn't any kind of "efficient market" for prioritizing high-impact questions. There's also a bit of intimidation here: it feels a bit wrong to challenge someone like MacAskill or Bostrom on really critical philosophical questions. But that's precisely where we should be focusing more attention.