For example, would you really not have thought worse of MIRI (Singularity Institute at the time) if it had labeled Holden Karnofsky’s public criticism “hostile” and refused to respond to it, citing that its time could be better spent elsewhere?
To be clear, I think that ACE calling the OP “hostile” is a pretty reasonable thing to judge them for. My objection is only to judging them for the part where they don’t want to respond any further. So as for the example, I definitely would have thought worse of MIRI if they had labeled Holden’s criticisms as “hostile”—but not just for not responding. Perhaps a better example here would be MIRI still not having responded to Paul’s arguments for slow takeoff—imo, I think Paul’s arguments should update you, but MIRI not having responded shouldn’t.
Would you update in a positive direction if an organization does effectively respond to public criticism?
I think you should update on all the object-level information that you have, but not update on the meta-level information coming from an inference like “because they chose not to say something here, that implies they don’t have anything good to say.”
Do you update on the existence of the criticism itself, before knowing whether or how the organization has chosen to respond?
Yes.
Still pretty unclear about your policy. Why is ACE calling the OP “hostile” not considered “meta-level” and hence not updateable (according to your policy)? What if the org in question gave a more reasonable explanation of why they’re not responding, but doesn’t address the object-level criticism? Would you count that in their favor, compared to total silence, or compared to an unreasonable explanation? Are you making any subjective judgments here as to what to update on and what not to, or is there a mechanical policy you can write down (that anyone can follow and achieve the same results)?
Also, overall, is your policy intended to satisfy Conservation of Expected Evidence, or not?
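(For reference, the principle being invoked here is a standard identity; H and E below are placeholder labels, e.g. “the org’s conclusions are sound” and “the org responds convincingly to the criticism”, not anything specific from the discussion:

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

so if observing E would raise your credence in H, then observing ¬E must lower it by a compensating amount.)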
ETA: It looks like MIRI did give at least a short object-level reply to Paul’s takeoff speed argument along with a meta-level explanation of why they haven’t given a longer object-level reply. Would you agree to a norm that said that organizations have at least an obligation to give a reasonable meta-level explanation of why they’re not responding to criticism on the object level, and silence or an unreasonable explanation on that level could be held against them?
I think you’re imagining that I’m doing something much more exotic here than I am. I’m basically just advocating for cooperating in what I see as a prisoner’s-dilemma-style game (I’m sure you can also cast it as a stag hunt or make some really complex game-theoretic model to capture all the nuances—I’m not trying to do that; my point here is just to explain the sort of thing that I’m doing).
Consider:
A and B can each choose:
public) publicly argue against the other
private) privately discuss the right thing to do
And they each have utility functions such that
A = public; B = private:
u_A = 3
u_B = 0
Why: A is able to argue publicly that A is better than B and therefore gets a bunch of resources, but this costs resources and overall some of their shared values are destroyed due to public argument not directing resources very effectively.
A = private; B = public:
u_A = 0
u_B = 3
Why: ditto except the reverse.
A = public; B = public:
u_A = 1
u_B = 1
Why: Both A and B argue publicly that they’re better than each other, which consumes a bunch of resources and leads to a suboptimal allocation.
A = private; B = private:
u_A = 2
u_B = 2
Why: Neither A nor B argue publicly that they’re better than each other, not consuming as many resources and allowing for a better overall resource allocation.
Then, I’m saying that in this sort of situation you should play (private) rather than (public)—and that therefore we shouldn’t punish people for playing (private), since punishing people for playing (private) has the effect of forcing us to the Nash equilibrium, ensuring that people always play (public) and destroying overall welfare.
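(As a quick check of that claim—this sketch is not from the original comment, and uses only the illustrative payoffs given above—a few lines of Python confirming that (public, public) is the only pure-strategy Nash equilibrium of this game, while (private, private) maximizes total utility:

```python
from itertools import product

# Payoffs (u_A, u_B) from the matrix above, indexed by (A's move, B's move).
payoffs = {
    ("public", "private"): (3, 0),
    ("private", "public"): (0, 3),
    ("public", "public"): (1, 1),
    ("private", "private"): (2, 2),
}
moves = ["public", "private"]

def is_nash(a, b):
    """A profile is a pure Nash equilibrium if neither player gains by deviating unilaterally."""
    u_a, u_b = payoffs[(a, b)]
    return all(payoffs[(a2, b)][0] <= u_a for a2 in moves) and \
           all(payoffs[(a, b2)][1] <= u_b for b2 in moves)

nash = [p for p in product(moves, moves) if is_nash(*p)]
best_total = max(product(moves, moves), key=lambda p: sum(payoffs[p]))
print(nash)        # [('public', 'public')] -- the unique equilibrium
print(best_total)  # ('private', 'private') -- the welfare-maximizing profile
```

This is the standard prisoner’s-dilemma structure: (public) strictly dominates (private) for each player, which is why the argument above says a norm against punishing (private) is needed to keep play at (private, private).)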
(It seems that you’re switching the topic from what your policy is exactly, which I’m still unclear on, to the model/motivation underlying your policy, which perhaps makes sense, as if I understood your model/motivation better perhaps I could regenerate the policy myself.)
I think I may just outright disagree with your model here, since it seems that you’re not taking into account the significant positive externalities that a public argument can generate for the audience (in the form of more accurate beliefs, about the organizations involved and EA topics in general, similar to the motivation behind the DEBATE proposal for AI alignment).
Another crux may be your statement “Online discussions are very often terrible” in your original comment, which has not been my experience if we’re talking about online discussions made in good faith in the rationalist/EA communities (and it seems like most people agree that the OP was written in good faith). I would be interested to hear what experiences led to your differing opinion.
But even when online discussions are “terrible”, they can still generate valuable information for the audience about the competence (e.g., reasoning abilities, PR skills) or lack thereof of the parties to the discussion, perhaps causing a downgrade of opinions about both parties.
Finally, even if your model is a good one in general, it’s not clear that it’s applicable to this specific situation. It doesn’t seem like ACE is trying to “play private” as they have given no indication that they would be or would have been willing to discuss this issue in private with any critic. Instead it seems like they view time spent on engaging such critics as having very low value because they’re extremely confident that their own conclusions are the right ones (or at least that’s the public reason they’re giving).
To be clear, I agree with a lot of the points that you’re making—the point of sketching out that model was just to show the sort of thing I’m doing; I wasn’t actually trying to argue for a specific conclusion. The actual correct strategy for figuring out the right policy here, in my opinion, is to carefully weigh all the different considerations like the ones you’re mentioning, which—at the risk of crossing object and meta levels—I suspect to be difficult to do in a low-bandwidth online setting like this.
Maybe it’ll still be helpful to just give my take using this conversation as an example. In this situation, I expect that:
My models here are complicated enough that I don’t expect to be able to convey them here to a point where you’d understand them without a lot of effort.
I expect I could properly convey them in a more high-bandwidth conversation (e.g. offline, not text), which I’d be willing to have with you if you wanted.
To the extent that we try to do so online, I think there are systematic biases in the format which will lead to beliefs (of at least the readers) being systematically pushed in incorrect directions—as an example, I expect arguments/positions that rest on simple, universalizing arguments (e.g. “Bayesian reasoning says we should do this, therefore we should do it”) to win out over arguments that involve summing up a bunch of pros and cons and then concluding that the result is above or below some threshold (which in my opinion is what most actual true arguments look like).
If there are lots of considerations that have to be weighed against each other, then it could easily be the case that we should decide things on a case-by-case basis, as sometimes the considerations might weigh in favor of downvoting someone for refusing to engage with criticism, and other times they weigh in the other direction. But this seems inconsistent with your original blanket statement, “I don’t think any person or group should be downvoted or otherwise shamed for not wanting to engage in any sort of online discussion”.
About online versus offline, I’m confused why you think you’d be able to convey your model offline but not online, as the bandwidth difference between the two doesn’t seem large enough that you could do one but not the other. Maybe it’s not just the bandwidth but other differences between the two mediums, but I’m skeptical that offline/audio conversations are overall less biased than online/text conversations. If they each have their own biases, then it’s not clear what it would mean if you could convince someone of some idea over one medium but not the other.
If the stakes were higher or I had a bunch of free time, I might try an offline/audio conversation with you anyway to see what happens, but it doesn’t seem like a great use of our time at this point. (From your perspective, you might spend hours but at most convince one person, which would hardly make a dent if the goal is to change the Forum’s norms. I feel like your best bet is still to write a post to make your case to a wider audience, perhaps putting in extra effort to overcome the bias against it if there really is one.)
I’m still pretty curious what experiences led you to think that online discussions are often terrible, if you want to just answer that. Also, are there other ideas that you think are good but can’t be spread through a text medium because of its inherent bias?