There's an asymmetry between people/orgs that are more willing to publicly write impressions and things they've heard, and people/orgs that don't do much of that. You could call the continuum "transparent and communicative, vs. locked down and secretive" or "recklessly repeating rumors and speculation, vs. professional" depending on your views!
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculation that is inaccurate but goes uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what's a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn't work at that org, remember the asymmetry. These situations will look identical to most readers:

1. The org really is doing a shady thing, and doesn't want to discuss it.
2. The org really is doing the thing, but if you knew the full picture you wouldn't think it was shady.
3. The claims are importantly inaccurate, but the org is not going to spend staff time coordinating a response.
4. The claims are importantly inaccurate, and the org will post a comment next Tuesday that you probably won't notice.
Epistemic status: strong opinions, lightly held

> I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back, sounds… not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn't be undone. But I don't think that's true of the case you refer to?)
> A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculation that is inaccurate but goes uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
I don't think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org's purpose is to make money for its shareholders: the org has no "teammates" outside of itself; no one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation of cooperation, and a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X's strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost and benefit to org X of communicating, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X's thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
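To make the implied decision rule explicit (the notation below is just illustrative shorthand for the prose above, not something org X actually computes):

$$\text{communicate} \iff \mathbb{E}[T_{\text{saved}}] > T_{\text{staff}}$$

where $T_{\text{staff}}$ is the staff time spent communicating (half a day in the example) and $\mathbb{E}[T_{\text{saved}}]$ is the expected quality-adjusted productive time the rest of the community gains, e.g. through better decisions about what to work on.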
> When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
I would personally feel a lot better about a community where employees aren't policed by their org on what they can and cannot say. (This point has been debated before; see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there were no requirement for someone to get their EAF/LW comment checked by their org's communications staff, that would significantly lower the time and effort barrier to publishing such comments, and the whole argument that such comments are too time-consuming to publish would become much weaker.
All this to say: I think you're directionally correct with your closing bullet points. I think it's good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community).[2] I think that with better norms, we wouldn't have to put as much weight on bullet points 2 and 3, and we'd all be stronger for it.

[1] Or, rather, what staff at org X are thinking. (I don't think an org itself can meaningfully have beliefs: people have beliefs.)

[2] Note: Although I mentioned Rethink Priorities earlier, I'm not thinking about Rethink Priorities here.
> I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back, sounds… not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round...
I agree that it depends on the situation, but I think this would often be quite a lot worse in real, non-ideal situations. In ideal communicative situations, mistaken information can simply be corrected at minimal cost. But in non-ideal situations, I think one will often see things like:
- Mistaken information gets shared, and people spend time debating or being confused about the false information.
- Many people never notice, or forget, that the mistaken information got corrected, and it keeps getting believed and shared.
- Some people speculate that the mistaken claims weren't innocently shared, but that the board member was being evasive or dishonest.
- People conclude that the organization/board is incompetent and chaotic because they can't even get basic facts right.
Fwiw, I think different views about this ideal/non-ideal distinction underlie a lot of disagreements about communicative norms in EA.
> they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
I think anonymous accounts can help a bit with this. I would encourage people to make an anonymous account if they feel it would help them quickly share useful information without having to follow the discussion (while keeping in mind that no account is truly anonymous, and it's likely that committed people could easily deanonymize it).