As Jakub has mentioned above, we have reviewed the points in his comment and fully support Anima International’s wish to share their perspective in this thread. However, Anima’s description of the events above does not align with our understanding of the events that took place, primarily within points 1, 5, and 6. We have declined to include our perspective here. The most time-consuming part of our commitment to Representation, Equity, and Inclusion has been responding to hostile communications in the EA community about the topic, such as this one. We prefer to use our time and generously donated funds towards our core programs. Therefore, we will not be engaging any further in this thread.
Would you really call Jakub’s response “hostile”?
Why was this response downvoted so heavily? (This is not a rhetorical question—I’m genuinely curious what the specific reasons were.)
As Jakub has mentioned above, we have reviewed the points in his comment and fully support Anima International’s wish to share their perspective in this thread. However, Anima’s description of the events above does not align with our understanding of the events that took place, primarily within points 1, 5, and 6.
This is relevant, useful information.
The most time-consuming part of our commitment to Representation, Equity, and Inclusion has been responding to hostile communications in the EA community about the topic, such as this one.
Perhaps the objection is to ACE’s description of the OP as “hostile”? I certainly didn’t think the OP was hostile, so if that’s the concern, I would agree, but...
We prefer to use our time and generously donated funds towards our core programs. Therefore, we will not be engaging any further in this thread.
I think this is an extremely reasonable position, and I don’t think any person or group should be downvoted or otherwise shamed for not wanting to engage in any sort of online discussion. Online discussions are very often terrible and I think it’s a problem if we have a norm that requires people or organizations to publicly engage with any online discussion that mentions them.
I didn’t downvote (because as you say it’s providing relevant information), but I did have a negative reaction to the comment. I think the generator of that negative reaction is roughly: the vibe of the comment seems more like a political attempt to close down the conversation than an attempt to cooperatively engage. I’m reminded of “missing moods”; it seems like there’s a legitimate position of “it would be great to have time to hash this out but unfortunately we find it super time consuming so we’re not going to”, but it would naturally come with a mood of sadness that there wasn’t time to get into things, whereas the mood here feels more like “why do we have to put up with you morons posting inaccurate critiques?”. And perhaps that’s a reasonable position, but it at least leaves a kind of bad taste.
That’s a great point; I agree with that.
I downvoted because it called the communication hostile without any justification for that claim. The comment it is replying to doesn’t seem at all hostile to me, and asserting it is feels like it’s violating some pretty important norms about not escalating conflict and engaging with people charitably.
I also think I disagree that orgs should never be punished for not wanting to engage in any sort of online discussion. We have shared resources to coordinate over, and as a social network without clear boundaries, it is unclear how to make progress on many of the disputes over those resources without any kind of public discussion. I do think we should be really careful to not end up in a state where you have to constantly monitor all online activity related to your org, but if the accusations are substantial enough, and the stakes high enough, I think it’s pretty important for people to make themselves available for communication.
Importantly, the above also doesn’t highlight any non-public communication channels that people who are worried about the negative effects of ACE can use instead. The above is not saying “we are worried about this conversation being difficult to have in public, please reach out to us via these other channels if you think we are causing harm”. Instead it just declares a broad swath of communication “hostile” and doesn’t provide any path forward for concerns to be addressed. That strikes me as quite misguided given the really substantial stakes around the reputational, financial, and talent-related resources that ACE shares with the rest of the EA community.
I mean, it’s fine if ACE doesn’t want to coordinate with the rest of the EA community, but I do think that currently, unless something very substantial changes, ACE and the rest of EA are drawing from shared resource pools and need to coordinate somehow if we want to avoid tragedies of the commons.
The comment it is replying to doesn’t seem at all hostile to me
(I mostly agree with your comment, but note that from the wording of ACE’s comment it isn’t clear to me if (a) they think that Jakub’s comment is hostile or (b) that Hypatia’s OP is hostile, or (c) that the whole discussion is hostile or whatever. To be clear, I think that kind of ambiguity is also a strike against that comment.)
Oh, yeah, that’s fair. I had interpreted it as referring to Jakub’s comment. I think there is a slightly stronger case to call Hypatia’s post hostile than Jakub’s comment, but in either case the statement feels pretty out of place.
Yeah, I downvoted because it called the communication hostile without any justification for that claim. The comment it is replying to doesn’t seem at all hostile to me, and asserting it is feels like it’s violating some pretty important norms about not escalating conflict and engaging with people charitably.
Yeah—I mostly agree with this.
I think it’s pretty important for people to make themselves available for communication.
Are you sure that they’re not available for communication? I know approximately nothing about ACE, but I’d be surprised if they wouldn’t be willing to talk to you after e.g. sending them an email.
Importantly, the above also doesn’t highlight any non-public communication channels that people who are worried about the negative effects of ACE can use instead. The above is not saying “we are worried about this conversation being difficult to have in public, please reach out to us via these other channels if you think we are causing harm”. Instead it just declares a broad swath of potential communication “hostile” and doesn’t provide any path forward for concerns to be addressed. That strikes me as quite misguided given the really substantial stakes around the reputational, financial, and talent-related resources that ACE shares with the rest of the EA community.
I’m a bit skeptical of this sort of “well, if they’d also said X then it would be okay” argument. I think we should generally try to be charitable in interpreting unspecified context rather than assume the worst. I also think there’s a strong tendency for goalpost-moving with this sort of objection—are you sure that, if they had said more things along those lines, you wouldn’t still have objected?
I mean, it’s fine if ACE doesn’t want to coordinate with the rest of the EA community, but I do think that currently, unless something very substantial changes, ACE and the rest of EA are drawing from shared resource pools and need to coordinate somehow if we want to avoid tragedies of the commons.
To be clear, I don’t have a problem with this post existing—I think it’s perfectly reasonable for Hypatia to present their concerns regarding ACE in a public forum so that the EA community can discuss and coordinate around what to do regarding those concerns. What I have a problem with is the notion that we should punish ACE for not responding to those accusations—I don’t think they should have an obligation to respond, and I don’t think we should assume the worst about them from their refusal to do so (nor should we always assume the best; I think the correct response is to be charitable but uncertain).
I also think there’s a strong tendency for goalpost-moving with this sort of objection—are you sure that, if they had said more things along those lines, you wouldn’t still have objected?
I do think I would have still found it pretty sad for them to not respond, because I do really care about our public discourse and this issue feels important to me, but I do think I would feel substantially less bad about it, and probably would only have mild-downvoted the comment instead of strong-downvoted it.
What I have a problem with is the notion that we should punish ACE for not responding to those accusations—I don’t think they should have an obligation to respond
I mean, I do think they have a bit of an obligation to respond? I don’t know exactly what you mean by obligation, and I don’t think they are necessarily morally bad people, but I do think that it sure costs me and others a bunch for them to not respond, and it makes coordinating overall harder.
As an example, I sometimes have to decide which organizations to invite to events that I am organizing that help people in the EA community coordinate (historically things like the EA Leaders Retreat or EA Global, now it’s more informal retreats and one-off things). The things discussed here feel like decent arguments to reduce those invites some amount, since I do think it’s evidence that ACE’s culture isn’t a good fit for events like that. I would have liked ACE to respond to these accusations, and additionally, I would have liked ACE to respond to them publicly so I don’t have to justify my invite to other attendees who don’t know what their response was, even if I had reached out in private.
In a hypothetical world where we had great private communication channels and I could just ask ACE a question in some smaller, higher-trust circle of people who would go to the EA Leaders forum, or tend to attend whatever retreats and events I am running, then sure, that might be fine. But we don’t have those channels, and the only way I know to establish common knowledge in basically any group larger than 20 people within the EA community is to have it be posted publicly. And that means relying on private communication makes a lot of stuff like this really hard.
To be clear, I think it’s perfectly reasonable for you to want ACE to respond if you expect that information to be valuable. The question is what you do when they don’t respond. The response in that situation that I’m advocating for is something like “they chose not to respond, so I’ll stick with my previous best guess” rather than “they chose not to respond, therefore that says bad things about them, so I’ll update negatively.” I think that the latter response is not only corrosive in terms of pushing all discussion into the public sphere even when that makes it much worse, but it also hurts people’s ability to feel comfortable holding onto non-public information.
“they chose not to respond, therefore that says bad things about them, so I’ll update negatively.” I think that the latter response is not only corrosive in terms of pushing all discussion into the public sphere even when that makes it much worse, but it also hurts people’s ability to feel comfortable holding onto non-public information.
This feels wrong from two perspectives:
It clearly is actual, boring, normal, bayesian evidence that they don’t have a good response. It’s not overwhelming evidence, but someone declining to respond sure is screening off the worlds where they had a great low-inferential-distance reply that was cheap to shoot off that addressed all the concerns. Of course I am going to update on that. (See the worked example after this list.)
I do just actually think there is a tragedy of the commons scenario with public information, and for proper information flow you need some incentives to publicize information. You and I have longstanding disagreements on the right architecture here, but from my perspective of course you want to reward organizations for being transparent and punish organizations if they are being exceptionally non-transparent. I definitely prefer to join social groups that have norms of information sharing among their members, and where members invest substantial resources to share important information with others, and where you don’t get to participate in the commons if you don’t invest an adequate amount of resources into sharing important information and responding to important arguments.
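To make the first point concrete, here is a minimal worked Bayes calculation of the kind of update being described. The probabilities are purely illustrative assumptions, not estimates about ACE or any particular organization.

```python
# Illustrative Bayes update for "declining to respond is evidence" (all numbers are made up).
# H = "the org has a good, cheap rebuttal available"
# E = "the org declines to respond publicly"

p_h = 0.5               # prior that a good, cheap rebuttal exists (illustrative assumption)
p_e_given_h = 0.2       # orgs with a cheap rebuttal usually post it (illustrative assumption)
p_e_given_not_h = 0.7   # orgs without one stay silent more often (illustrative assumption)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e  # Bayes' rule: P(H | silence)

print(f"P(good rebuttal | silence) = {p_h_given_e:.2f}")  # ~0.22, down from the 0.50 prior
```

The direction of the update only depends on silence being more likely when no good, cheap rebuttal exists; the size of the update depends entirely on the assumed numbers.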
It clearly is actual, boring, normal, bayesian evidence that they don’t have a good response. It’s not overwhelming evidence, but someone declining to respond sure is screening off the worlds where they had a great low-inferential distance reply that was cheap to shoot off that addressed all the concerns. Of course I am going to update on that.
I think that you need to be quite careful with this sort of naive-CDT-style reasoning. Pre-commitments/norms against updating on certain types of evidence can be quite valuable—it is just not the case that you should always update on all evidence available to you.[1]
[1] To be clear, I don’t think you need UDT or anything to handle this sort of situation, you just need CDT + the ability to make pre-commitments.
I agree the calculation isn’t super straightforward, and there is a problem of disincentivizing glomarization here, but I do think overall, all things considered, after having thought about situations pretty similar to this for a few dozen hours, I am pretty confident it’s still decent bayesian evidence, and I endorse treating it as bayesian evidence (though I do think the pre-commitment considerations dampen the degree to which I am going to act on that information a bit, albeit not anywhere close to fully).
I disagree, obviously, though I suspect that little will be gained by hashing it out further here. To be clear, I have certainly thought about this sort of issue in great detail as well.
I would be curious to read more about your approach, perhaps in another venue. Some questions I have:
Do you propose to apply this (not updating when an organization refuses to engage with public criticism) universally? For example, would you really not have thought worse of MIRI (Singularity Institute at the time) if it had labeled Holden Karnofsky’s public criticism “hostile” and refused to respond to it, citing that its time could be better spent elsewhere? If not, how do you decide when to apply this policy? If yes, how do you prevent bad actors from taking advantage of the norm to become immune to public criticism?
Would you update in a positive direction if an organization does effectively respond to public criticism? If not, that seems extremely strange/counterintuitive, but if yes I suspect that might lead to dynamic inconsistencies in one’s decision making (although I haven’t thought about this deeply).
Do you update on the existence of the criticism itself, before knowing whether or how the organization has chosen to respond?
I guess in general I’m pretty confused about what your proposed policy or norm is, and would appreciate some kind of thought-out exposition.
For example, would you really not have thought worse of MIRI (Singularity Institute at the time) if it had labeled Holden Karnofsky’s public criticism “hostile” and refused to respond to it, citing that its time could be better spent elsewhere?
To be clear, I think that ACE calling the OP “hostile” is a pretty reasonable thing to judge them for. My objection is only to judging them for the part where they don’t want to respond any further. So as for the example, I definitely would have thought worse of MIRI if they had labeled Holden’s criticisms as “hostile”—but not just for not responding. Perhaps a better example here would be MIRI still not having responded to Paul’s arguments for slow takeoff—imo, Paul’s arguments should update you, but MIRI not having responded shouldn’t.
Would you update in a positive direction if an organization does effectively respond to public criticism?
I think you should update on all the object-level information that you have, but not update on the meta-level information coming from an inference like “because they chose not to say something here, that implies they don’t have anything good to say.”
Do you update on the existence of the criticism itself, before knowing whether or how the organization has chosen to respond?
Yes.
I’m still pretty unclear about your policy. Why is ACE calling the OP “hostile” not considered “meta-level” and hence not something to update on (according to your policy)? What if the org in question gave a more reasonable explanation of why they’re not responding, but doesn’t address the object-level criticism? Would you count that in their favor, compared to total silence, or compared to an unreasonable explanation? Are you making any subjective judgments here as to what to update on and what not to, or is there a mechanical policy you can write down (that anyone can follow and achieve the same results)?
Also, overall, is your policy intended to satisfy Conservation of Expected Evidence, or not?
ETA: It looks like MIRI did give at least a short object-level reply to Paul’s takeoff speed argument along with a meta-level explanation of why they haven’t given a longer object-level reply. Would you agree to a norm that said that organizations have at least an obligation to give a reasonable meta-level explanation of why they’re not responding to criticism on the object level, and silence or an unreasonable explanation on that level could be held against them?
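Regarding the Conservation of Expected Evidence question above, here is a minimal numeric sketch of the constraint it imposes, with purely illustrative numbers: if silence moves your estimate down, then a response must, on average, move it up, so that the prior equals the expected posterior.

```python
# Numeric sketch of Conservation of Expected Evidence (all numbers are illustrative assumptions).
# The prior must equal the probability-weighted average of the possible posteriors:
#   P(H) = P(H | respond) * P(respond) + P(H | silent) * P(silent)

p_h = 0.5                # prior that the org is basically in the right here (illustrative)
p_respond = 0.6          # probability the org responds at all (illustrative)
p_h_given_silent = 0.4   # a posited downward update on silence (illustrative)

# Given the downward update on silence, the identity fixes the average posterior on a response:
p_h_given_respond = (p_h - p_h_given_silent * (1 - p_respond)) / p_respond

print(f"P(H | respond) must average {p_h_given_respond:.3f}, above the prior of {p_h}")
```

This only illustrates what the identity requires; it does not settle whether the proposed "don't update on silence" policy satisfies it.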
I think you’re imagining that I’m doing something much more exotic here than I am. I’m basically just advocating for cooperating on what I see as a prisoner’s-dilemma-style game (I’m sure you can also cast it as a stag hunt or make some really complex game-theoretic model to capture all the nuances—I’m not trying to do that; my point here is just to explain the sort of thing that I’m doing).
Consider:
A and B can each choose:
public) publicly argue against the other
private) privately discuss the right thing to do
And they each have utility functions such that
A = public; B = private:
u_A = 3
u_B = 0
Why: A is able to argue publicly that A is better than B and therefore gets a bunch of resources, but this costs resources and overall some of their shared values are destroyed due to public argument not directing resources very effectively.
A = private; B = public:
u_A = 0
u_B = 3
Why: ditto except the reverse.
A = public; B = public:
u_A = 1
u_B = 1
Why: Both A and B argue publicly that they’re better than each other, which consumes a bunch of resources and leads to a suboptimal allocation.
A = private; B = private:
u_A = 2
u_B = 2
Why: Neither A nor B argue publicly that they’re better than each other, not consuming as many resources and allowing for a better overall resource allocation.
Then, I’m saying that in this sort of situation you should play (private) rather than (public)—and that therefore we shouldn’t punish people for playing (private), since punishing people for playing (private) has the effect of forcing us to the Nash equilibrium and ensuring that people always play (public), destroying overall welfare.
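As a minimal sketch, the 2x2 game above can be checked mechanically. The payoffs below are exactly the ones given in the comment; the code simply confirms that (public, public) is the only pure-strategy Nash equilibrium even though (private, private) is better for both players.

```python
# Payoff matrix from the comment above: (u_A, u_B) indexed by (A's move, B's move).
payoffs = {
    ("public", "private"): (3, 0),
    ("private", "public"): (0, 3),
    ("public", "public"): (1, 1),
    ("private", "private"): (2, 2),
}
moves = ["public", "private"]

# A pure-strategy Nash equilibrium: neither player gains by unilaterally deviating.
def is_nash(a, b):
    u_a, u_b = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= u_a for a2 in moves)
    best_b = all(payoffs[(a, b2)][1] <= u_b for b2 in moves)
    return best_a and best_b

equilibria = [(a, b) for a in moves for b in moves if is_nash(a, b)]
print(equilibria)  # [('public', 'public')]; the (2, 2) outcome is Pareto-better but not stable
```

In other words, under these stated payoffs "public" strictly dominates, which is what makes this a prisoner's-dilemma-style argument for sustaining (private, private) through norms rather than through individual incentives.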
(It seems that you’re switching the topic from what your policy is exactly, which I’m still unclear on, to the model/motivation underlying your policy, which perhaps makes sense, as if I understood your model/motivation better perhaps I could regenerate the policy myself.)
I think I may just outright disagree with your model here, since it seems that you’re not taking into account the significant positive externalities that a public argument can generate for the audience (in the form of more accurate beliefs, about the organizations involved and EA topics in general, similar to the motivation behind the DEBATE proposal for AI alignment).
Another crux may be your statement “Online discussions are very often terrible” in your original comment, which has not been my experience if we’re talking about online discussions made in good faith in the rationalist/EA communities (and it seems like most people agree that the OP was written in good faith). I would be interested to hear what experiences led to your differing opinion.
But even when online discussions are “terrible”, that can still generate valuable information for the audience, about the competence (e.g., reasoning abilities, PR skills) or lack thereof of the parties to the discussion, perhaps causing a downgrade of opinions about both parties.
Finally, even if your model is a good one in general, it’s not clear that it’s applicable to this specific situation. It doesn’t seem like ACE is trying to “play private”, as they have given no indication that they would be or would have been willing to discuss this issue in private with any critic. Instead it seems like they view time spent on engaging such critics as having very low value because they’re extremely confident that their own conclusions are the right ones (or at least that’s the public reason they’re giving).
To be clear, I agree with a lot of the points that you’re making—the point of sketching out that model was just to show the sort of thing I’m doing; I wasn’t actually trying to argue for a specific conclusion. The actual correct strategy for figuring out the right policy here, in my opinion, is to carefully weigh all the different considerations like the ones you’re mentioning, which—at the risk of crossing object and meta levels—I suspect to be difficult to do in a low-bandwidth online setting like this.
Maybe it’ll still be helpful to just give my take using this conversation as an example. In this situation, I expect that:
My models here are complicated enough that I don’t expect to be able to convey them here to a point where you’d understand them without a lot of effort.
I expect I could properly convey them in a more high-bandwidth conversation (e.g. offline, not text), which I’d be willing to have with you if you wanted.
To the extent that we try to do so online, I think there are systematic biases in the format which will lead to beliefs (of at least the readers) being systematically pushed in incorrect directions—as an example, I expect arguments/positions that use simple, universalizing arguments (e.g. Bayesian reasoning says we should do this, therefore we should do it) to lose out to arguments that involve summing up a bunch of pros and cons and then concluding that the result is above or below some threshold (which in my opinion is what most actual true arguments look like).
If there are lots of considerations that have to be weighed against each other, then it seems easily the case that we should decide things on a case-by-case basis, as sometimes the considerations might weigh in favor of downvoting someone for refusing to engage with criticism, and other times they weigh in the other direction. But this seems inconsistent with your original blanket statement, “I don’t think any person or group should be downvoted or otherwise shamed for not wanting to engage in any sort of online discussion”.
About online versus offline, I’m confused why you think you’d be able to convey your model offline but not online, as the bandwidth difference between the two doesn’t seem large enough that you could do one but not the other. Maybe it’s not just the bandwidth but other differences between the two mediums, but I’m skeptical that offline/audio conversations are overall less biased than online/text conversations. If they each have their own biases, then it’s not clear what it would mean if you could convince someone of some idea over one medium but not the other.
If the stakes were higher or I had a bunch of free time, I might try an offline/audio conversation with you anyway to see what happens, but it doesn’t seem like a great use of our time at this point. (From your perspective, you might spend hours but at most convince one person, which would hardly make a dent if the goal is to change the Forum’s norms. I feel like your best bet is still to write a post to make your case to a wider audience, perhaps putting in extra effort to overcome the bias against it if there really is one.)
I’m still pretty curious what experiences led you to think that online discussions are often terrible, if you want to just answer that. Also are there other ideas that you think are good but can’t be spread through a text medium because of its inherent bias?
Are you sure that they’re not available for communication? I know approximately nothing about ACE, but I’d be surprised if they wouldn’t be willing to talk to you after e.g. sending them an email.
Yeah, I am really not sure. I will consider sending them an email. My guess is they are not interested in talking to me in a way that would later on allow me to write up what they said publicly, which would reduce the value of their response quite drastically to me. If they are happy to chat and allow me to write things up, then I might be able to make the time, but it does sound like a 5+ hour time-commitment and I am not sure whether I am up for that. Though I would be happy to pay $200 to anyone else who does that.