I did it in my head and I haven’t tried to put it into words, so take this with a grain of salt.
Pros:
Orgs get time to correct misconceptions.
(Actually I think that’s pretty much the only pro but it’s a big pro.)
Cons:
It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. (There’s a good chance I would have procrastinated on this and not gotten my post out until next year, which means I would have had to make my 2024 donations without publishing this writeup first.)
Communicating beforehand would make me overly concerned about being nice to the people I talked to, and might prevent me from saying harsh but true things because I don’t want to feel mean.
Orgs can still respond to the post after it’s published; it’s not as if it’s impossible for them to respond at all.
Here are some relevant EA Forum/LW posts (the comments are relevant too):
https://forum.effectivealtruism.org/posts/f77iuXmgiiFgurnBu/run-posts-by-orgs
https://www.lesswrong.com/posts/Hsix7D2rHyumLAAys/run-posts-by-orgs#comments
https://forum.effectivealtruism.org/posts/hM4atR2MawJK7jmwe/building-cooperative-epistemology-response-to-ea-has-a-lying
https://forum.effectivealtruism.org/posts/tuSQBGgnoxvsXwXJ3/criticism-is-sanctified-in-ea-but-like-any-intervention
https://forum.effectivealtruism.org/posts/QH9BGAoh2xnCdn2yS/omega-s-shortform?commentId=manyAkGcg6mgZio8t
This is quite a scalable activity. When I used to do this, I had a spreadsheet to keep track, generated emails from a template, and had very little back and forth—orgs just saw a draft of their section, had a few days to comment, and then I might or might not take their feedback into account.
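The spreadsheet-plus-template workflow described above could be sketched roughly as follows. This is a hypothetical illustration, not the commenter’s actual setup; the column names, template wording, and `generate_emails` helper are all assumptions:

```python
import csv
import io
from string import Template

# Hypothetical sketch of the "spreadsheet + template" workflow: each
# spreadsheet row tracks one org and its contact, and one email is
# generated per row from a shared template.

EMAIL_TEMPLATE = Template(
    "Subject: Draft review section for $org\n\n"
    "Hi $contact,\n\n"
    "I'm publishing a review that includes a section on $org. "
    "A draft of your section is attached; please send any comments "
    "within $days days.\n"
)

# Inline stand-in for the tracking spreadsheet (data is illustrative).
SPREADSHEET_CSV = """org,contact,days
Org Alpha,Alice,5
Org Beta,Bob,5
"""

def generate_emails(csv_text: str) -> list[str]:
    """Render one email per spreadsheet row."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [EMAIL_TEMPLATE.substitute(row) for row in rows]

emails = generate_emails(SPREADSHEET_CSV)
print(len(emails))  # one email per reviewed org
```

With a setup like this, reviewing 28 orgs costs one template and one spreadsheet rather than 28 hand-written emails, which is what makes the activity scalable.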
IIRC didn’t you somewhat frequently remove sections if the org objected because you didn’t have enough time to engage with them? (which I think was reasonably costly)
I remember removing an org entirely because they complained, though in that case they claimed they didn’t have enough time to engage with me (rather than the opposite). It’s also possible there are other cases I have forgotten. To your point, I have no objections to Michael’s “make me overly concerned about being nice” argument, which I do think is true.
Cool, I might just be remembering that one instance.
Thanks for clarifying! Really appreciate you engaging with this.
Re: It takes a lot longer. It seems like it takes a lot of time for you to monitor the comments on this post and update your top-level post in response. The cost of doing that after you post publicly, instead of before, is that people who read your initial post are a lot less likely to read the updated one. So I don’t think you save a massive amount of time here, and you increase the chance that other people become misinformed about orgs.
Re: Orgs can still respond to the post after it’s published. Some orgs aren’t posting some information publicly on purpose, but they will tell you things in confidence if you ask privately. If you publicly blast them on one of these topics, they will not publicly respond. I know EAs can be allergic to these kinds of dynamics, but politics is qualitatively different from ML research: managing relationships with multiple stakeholders with opposing views is delicate, and there are a bunch of bad actors working against AI safety in DC. You might be surprised by what kind of information is very dangerous for orgs to discuss publicly.
I’m just curious, have you discussed any of your concerns with somebody who has worked in policy for the US Government?