I think that makes sense as a worry, but I think EAs’ caution and reluctance to model-build and argue about this stuff has turned out to do more harm than good, so we should change tactics. (And we very probably should have done things differently from the get-go.)
If you’re worried that it’s dangerous to talk about something publicly, I’d start off by thinking about it privately and talking about it over Signal with friends, etc. Then you can progress to contacting more EAs privately, then to posting publicly, as it becomes increasingly clear “there’s real value in talking about this stuff” and “there’s not a strong-enough reason to keep quiet”.
Step one in doing that, though, has to be a willingness to think about the topic at all, even if there isn’t clear public social proof that this is a normal or “approved” direction to think in. I think a thing that helps here is to recognize how small the group of “EA leaders and elite researchers” is, how divided their attention is between hundreds of different tasks and subtasks, and how easy it is for many things to therefore fall through the cracks or just-not-happen.