Thanks for clarifying! Really appreciate you engaging with this.
Re: It takes a lot longer. It seems like it takes a lot of time for you to monitor the comments on this post and update your top-level post in response. The cost of doing that after you post publicly, instead of before, is that people who read your initial post are much less likely to read the updated one. So I don't think you save a massive amount of time here, and you increase the chance that other people become misinformed about orgs.
Re: Orgs can still respond to the post after it’s published. Some orgs aren’t posting certain information publicly on purpose, but they will tell you things in confidence if you ask privately. If you publicly blast them on one of these topics, they will not respond publicly. I know EAs can be allergic to these kinds of dynamics, but politics is qualitatively different from ML research: managing relationships with multiple stakeholders with opposing views is delicate, and there are a number of bad actors working against AI safety in DC. You might be surprised by what kinds of information are genuinely dangerous for orgs to discuss publicly.
I’m just curious, have you discussed any of your concerns with somebody who has worked in policy for the US Government?