I think it’s good and important to form views about people’s strengths, weaknesses, values and character. However, I am generally against forming negative views of people (on any of these dimensions) based on seemingly incorrect, poorly reasoned, or seemingly bad-values-driven public statements. When a public statement is not misleading or tangibly harmful, I generally am open to treating it as a positive update on the person making the statement, but not to treating it as worse news about them than if they had simply said nothing.
The basic reasons for this attitude are:
I think it is very easy to be wrong about the implications of someone’s public statement. It could be that their statement was poorly expressed, or aimed at another audience; that the reader is failing to understand subtleties of it; or that the statement is in fact wrong, but that it merely reflects that the person who made it hasn’t been sufficiently reflective or knowledgeable on the topic yet (and could become so later).
I think public discourse would be less costly and more productive for everyone if the attitude I take were more common. I think that one of the best ways to learn is to share one’s impressions, even (especially) when they might be badly wrong. I wish that public discourse could include more low-caution exploration, without the risks that currently come with such things.
I generally believe in evaluating people based on what they’ve accomplished and what they’ve had the opportunity to accomplish, plus any tangible harm (including misinformation) they’ve caused. I think this approach works well for identifying people who are promising and people whom I should steer clear of; I think other methods add little of value and mostly add noise.
I update negatively on people who mislead (including expressing great confidence while being wrong, and especially including avoidable mischaracterizations of others’ views); people who do tangible damage (usually by misleading); and people who create little of value despite large amounts of opportunity and time investment. But if someone is simply expressing a view and being open about their reasons for holding it, I try (largely successfully, I think) not to make any negative updates simply based on the substance.
The quote above is from Holden Karnofsky, on evaluating people based on public discourse.
FWIW I’m somewhat more judgemental than Holden, but I think the position Holden advocates is not that unusual for seniorish EAs.
Buck seems to be consistently missing the point.
Although leaders may say “I won’t judge or punish you if you disagree with me”, listeners are probably correct to interpret that as cheap talk. We have abundant evidence from society and history that those in positions of power can and do act against those who challenge them. A few remarks to the contrary should not convince people they are not at risk.
Someone who genuinely wanted to be open to criticism would recognise and address the fears people have about speaking up. Buck’s comment that “the fact that people want to hide their identities is not strong evidence they need to” struck me as highly dismissive. If people fear something, saying “well, you shouldn’t be scared” doesn’t generally make them less scared, but it does convey that you don’t care about them: you won’t expend effort to address their fears.
GiveWell liked your criticism so much they literally started a contest to get more like it and gave you tens of thousands of dollars.
I’m trying to read your comment charitably but come on. Saying this quote is “cheap talk” when you’ve personally profited from it not being cheap talk is unfair to the point of being actively deceptive.
This is a conflation of technical criticism (e.g. you critique a methodology or offer scientific evidence to the contrary) and office-politics criticism (e.g. you point out a conflict of interest or question a power dynamic).
Plant made a technical criticism, whereas office-politics disagreement is the kind that potentially carries social repercussions.
Besides, EA orgs aren’t the only party that matters: the media reads this forum too, and I can see how someone might not want a workplace conflict to become their top Google result.
Hmm. I guess I was thinking about this in general, rather than my own case. That said, I don’t think there’s any contradiction between there being visible financial prizes for criticism and people still rationally thinking that (some form of) criticism will get you in trouble. Costly signals may reduce fears, but that doesn’t mean they will remove or reverse them. It seems worth noting that there has just been a big EA critiques prize and people are presently using burner accounts.
But Buck wasn’t saying you shouldn’t be scared? He was just saying that high burner count isn’t much evidence for this.
Precisely, I think he was claiming that p(lots of burners | hiding identity is important) and p(lots of burners | hiding identity isn’t important) are pretty close.
I interpreted this as a pretty decoupled claim. (I do think a disclaimer might have been good.)
Now, this second comment (which is the root comment here) does try to argue you shouldn’t be worried, at least from Holden and somewhat from Buck.
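As an aside, the likelihood comparison a few comments up, that p(lots of burners | hiding identity is important) and p(lots of burners | hiding identity isn’t important) are pretty close, is exactly the situation in which Bayes’ rule says an observation should barely move your belief. A minimal sketch, with entirely made-up numbers:

```python
# Toy Bayesian update illustrating the claim above: if the two
# likelihoods are close, observing lots of burner accounts barely
# shifts belief in "hiding identity is important".
# All numbers here are invented for illustration only.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule for a binary hypothesis H vs. not-H."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

prior = 0.5  # assumed prior that hiding identity is important

close = posterior(prior, 0.6, 0.5)  # likelihoods nearly equal: weak evidence
far = posterior(prior, 0.9, 0.1)    # likelihoods far apart: strong evidence

print(round(close, 3))  # 0.545: belief barely moves from 0.5
print(round(far, 3))    # 0.9: belief moves a lot
```

When the likelihood ratio is near 1, the posterior stays near the prior, which is one way to read Buck’s point that burner count alone isn’t much evidence either way.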
Any ideas for more costly signals? One failure mode, that of “Contrarian Connie has a take that no one thinks is very good, but it is such a thorough and comprehensive take that it would look virtuous of us to hire a critic”, seems broadly bad. A corollary of the badness of this failure mode, from Connie’s perspective, is that there may not be a principled classifier that sorts “disagrees with my take” from “retaliation for challenging the status quo”.
[EDIT: I’m getting disagrees, and I’d really appreciate it if people could explain how I’m wrong that posting controversial things under a real name is better in expected-value terms. Likelihood of pros vs. likelihood of cons? Or tell me which other piece you disagree with.]
In that case, here’s a claim to the contrary, which I believe is easy to find evidence for: we are in a social movement where you get social status for being critical, for attempting to solve problems proactively, and for going against the grain; you get extra status for doing it bravely and publicly (as opposed to in the backrooms or something); you get (heaps of) social status for admitting you were wrong and retracting your claim; and you get points for handling conversation well.
So, here are four scenarios I see (again, I’m not collecting evidence here, but I believe it is there for all to see):
1. If you use your real name to write a criticism and it is well received, that’s a win.
2. If you use your real name to post a criticism and it is not well received, and you are convinced you were wrong, you can post a retraction, or both edit the top of your post and add a few comments saying commenters were right. You can also DM people to thank them for changing your mind. You will get points for epistemic humility and for bringing issues to light so they can be addressed, and that’s a win.
3. If you use your real name to post criticism and it is not well received, but you still believe your own side, then you won’t be the only one to believe it. You get to have your name attached to the idea, and people who still inevitably agree with you can reach out and give you opportunities. Plus you can feel liberated to put your energy elsewhere. Why would you want to work with people who don’t agree with you in cases relevant to your work? And if it isn’t relevant to your work, oh well. Now, here’s the real win: if you are proven right in the long run! Think of the points in and outside the movement. Example: imagine if someone had posted a public complaint about SBF and FTX before the crisis (and it had been ignored). Damn, that person would have gotten multiple journalist requests, and EAs would be like “teach us your forecasting skillz pls”.
4. If you use your real name to write criticism and the response is mixed and the answer is TBD, see the last sentence of #2. Also, if you handle the conversation well, every thoughtful, epistemically humble comment you make will get you points, as will your post overall.
As I said, these are examples rather than evidence, but they are things I see happen, and I think it is easy to find evidence of them if you look for it.
I expect comparatively little benefit from posting under a pseudonym. If you (reader) think these benefits are fake or overblown, or that you’d still get more benefit from anonymous posting, or you expect retribution on net (rather than status), I really don’t get why you like and believe in this movement, tbh. I’d just throw it out if I were you. I think effective altruism’s commitment to epistemics, and to judging people on their merits at doing good, is where its strength lies. If you don’t trust people in the movement to do the former, idk what to say.
Hello Ivy! I think you’ve missed at least one scenario, which is where you use your real name, your criticism is not well received, you have identified yourself as a troublemaker, and those in positions of power torpedo you. Surely this is a possibility? Unless people think it’s a serious possibility, it’s hard to make sense of why people write things anonymously, or just stay silent.
Honestly, I don’t think this is a good enough reason to post anonymously though… [Edit: In that I don’t think this risk makes the expected value of using your name negative. Even if there is a serious chance, it’s surely a relatively small one compared to the other outcomes. Above I was attempting to show that there are pros to posting under your real name which outweigh the cons in expected-value terms. I don’t think this risk changes the calculus.]
I also think it may not be a con exactly if that happens. Or it’s a pro and con nested together? Because I think EAs torpedoing people would be a serious issue that would need revealing. I don’t get why people wouldn’t want to use this moment to test the toxicity of the movement they want to be in tbh. Why would you just… not want to know if you’d be torpedoed? In that case it falls within #3 still: “If your critique is not well received… You can feel liberated to put your energy elsewhere.”
And if you get torpedoed, you can always write an exposé about it and help people who might be in a similar situation in the future!
Maybe someday I’ll be torpedoed, tbh. If this is a thing that leaders will do to anyone, I’m probably weird enough and have done misinterpretable enough things. Maybe I’d deserve it and maybe not, but either way I’d like to know if it’s going to happen, so that I can move on. In case you can’t tell, I have already had concerns like the anons seem to have, but I’ve decided I don’t like sitting with “Is EA for me or not?” It’s not pleasant or motivating. Honestly, it’s amazing how much that question drags down your potential. I want the answer, not to live with the question. As long as I don’t get torpedoed when being myself (or develop serious problems with the movement), that answer is yes. Which is just much more workable. Why would I not want to know? EA is not, like, sacred. It can and should be thrown out of your roster if it’s a bad fit or a problematic entity.
[Edit: I wonder if EAs just need to be taught these sorts of calculations?]
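For what it’s worth, the kind of expected-value calculation being discussed can be sketched very simply: real-name posting is a gamble over several outcomes (three of the scenarios above plus the rare torpedo case), while pseudonymous posting is roughly a sure, small payoff. Every probability and payoff below is an invented placeholder; only the structure of the comparison is the point:

```python
# Toy expected-value comparison for the argument above: posting
# criticism under your real name vs. a pseudonym.
# All probabilities and payoffs are made up for illustration only.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

real_name = expected_value([
    (0.40, 10),   # well received: status and opportunities
    (0.30, 5),    # wrong but retracted gracefully: humility points
    (0.25, 3),    # disagreement persists: name attached to the idea
    (0.05, -20),  # torpedoed by those in power: serious but assumed rare
])

pseudonym = expected_value([
    (1.0, 1),     # little credit or cost either way
])

print(real_name)                 # 5.25 under these made-up numbers
print(real_name > pseudonym)     # True
```

Under these assumed numbers the real-name gamble wins; someone who disagrees is effectively claiming a much higher torpedo probability or a much more negative payoff, which is a cleaner thing to argue about than vibes.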
Yes, this seems like the human default, and I think anyone who claims the default doesn’t apply to them bears the burden of proof to demonstrate that.
If people like Buck want to convince me that they’re different, the best way to do it would be to give a satisfactory theory of why this happens, then explain the specific measures they take and why they believe those measures are effective (e.g. “this RCT found that the use of Technique X produced a large effect size; I always use Technique X in situations A, B, C”). A person who’s succeeded in solving a problem should be able to demonstrate understanding of both the problem and its solution.
Edit: This paper looks interesting: https://pubmed.ncbi.nlm.nih.gov/22663351/
“Say things that feel true but take actual bravery” (as opposed to “perceived bravery”, where they’re mis-estimating how unpopular a sentiment is) is definitely a high-variance strategy for building relationships in which you’re valued and respected; it’s unavailable to the risk-intolerant and can backfire.