Came here via the FB post by Kat Woods: https://www.facebook.com/katxiowoods/posts/pfbid02mbupEfdsrmkcJwmDWS3E1qmpJQBycapzeFcijhBpi7rQMVx9iHjksA9koGC9b3WCl
which starts out with “𝗔𝗿𝗼𝘂𝗻𝗱 𝟳𝟓% 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 𝗰𝗵𝗮𝗻𝗴𝗲𝗱 𝘁𝗵𝗲𝗶𝗿 𝗺𝗶𝗻𝗱𝘀 𝗯𝗮𝘀𝗲𝗱 𝗼𝗻 𝘁𝗵𝗲 𝗲𝘃𝗶𝗱𝗲𝗻𝗰𝗲!”
and follows up with “Two mentally unwell ex-employees told dozens of falsehoods about us, but even in the darkest times, I told myself to trust that EAs/rationalists would update when they saw the evidence, and now I feel justified in that trust.
Turns out that 200+ pages of evidence showing that their accusations were false or misleading is enough for most people”
Since I am much more of a frequent flyer on FB than on the EA Forum, I wonder: where does the 75% figure come from?
EDIT: Asking this despite the post ending with “Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio.”, since this doesn’t help much with deriving said 75%.
As best as I can tell, it’s made up.
(Edit: The FB post now says “*Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio.” I don’t really see how you get that number from the vote count and karma.)
Somewhere within those wide error bars resides the truth.
Why on earth would somebody open their own article with an obviously partisan estimate like this, given they are talking to the EA community and not some imbeciles?
Since I don’t presume Nonlinear to be plain old stupid, I can’t wrap my head around this.
In the same comment at the bottom:
“Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio.”
Actually it was “*Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio” with an asterisk.
There was, however, no asterisk attached to the leading claim; instead there was a party-hat emoji. Either way, I didn’t feel very informed about how the 75% claim came to be. It struck me as dubious, more like an advertisement and a priming, which I consider especially strange when it directs to a matter like this.
Update: I changed the wording of the post to now state: 𝗔𝗿𝗼𝘂𝗻𝗱 𝟳𝟓% 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 𝘂𝗽𝘃𝗼𝘁𝗲𝗱 𝘁𝗵𝗲 𝗽𝗼𝘀𝘁, 𝘄𝗵𝗶𝗰𝗵 𝗶𝘀 𝗮 𝗿𝗲𝗮𝗹𝗹𝘆 𝗴𝗼𝗼𝗱 𝘀𝗶𝗴𝗻*
And the * at the bottom says: “Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio.” And of course this is not proof that everybody changed their mind. There are a lot of reasons to upvote or downvote the post. However, I do think it’s a good indicator.
This was a quick thing I dashed off, expecting to share it with my friends on Facebook, where I don’t spend as much time thinking about how to be completely precise. I was not expecting a stranger to post it on the Forum. When I post on the Forum, I spend more time trying to be precise and accurate. Sorry that this was communicated on the Forum in a form I never would have posted had I been asked.
I’ve switched the Facebook post to sharing only with friends (my default is sharing with everybody) because I now realize it is not safe to assume that people won’t share it in settings where it’s not appropriate.
The math I did: I assumed the average vote strength was 3 (an educated guess), then took the karma and vote count and worked out the percentage. This was for the LessWrong post as of a few hours ago. It came to around 70% on the EA Forum using the same method.
I did say in the post that I “Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio.”
Another way to come to that number: if the post were exactly 50/50 up- vs. downvotes, it would have zero karma. As of now, we’re at 164, so it has to be well above a 50% upvote rate.
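For concreteness, here is a minimal sketch of the napkin math described above. The average vote strength of 3 is the educated guess named in the comment and the 164 karma figure is quoted in-thread; the vote count of 100 is purely hypothetical, since no actual vote count is given.

```python
# Minimal sketch of the napkin math described above.
# Model: karma = avg_strength * votes * (u - (1 - u))
#              = avg_strength * votes * (2u - 1),
# where u is the fraction of voters who upvoted. Note that u = 0.5 gives
# karma = 0, which is the 50/50 sanity check made above.

def upvote_fraction(karma: float, vote_count: int, avg_strength: float = 3.0) -> float:
    """Solve the model above for u, the estimated upvote fraction."""
    return (karma / (avg_strength * vote_count) + 1) / 2

# The 164 karma figure is quoted in-thread; the 100-voter count is hypothetical.
print(round(upvote_fraction(164, 100), 2))  # 0.77, i.e. roughly "75% upvoted"
```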
If I’m understanding this right, you assume that if someone upvoted the post, it’s because they changed their mind?
Yes. It’s not completely precise, but I do think it’s unlikely that somebody upvoted the post if they didn’t either largely update or already think that Alice and Chloe had made false and misleading claims about us.
It’s Facebook, though, for my friends, not for the EA Forum. I would try to post more precise numbers here, but I’m not going to do a whole mathematical model for Facebook. This was posted here without my permission, and I also said in the post that this was a napkin math guesstimate.
I think many people (including myself and people at Lightcone) upvoted this post for signal-boosting reasons, and because it seems important to share contradicting evidence whether you agree with it or not. I really don’t think upvote to downvote ratio is a reasonable estimate of “having changed their mind” in this case.
I disagree, I think it’s entirely possible to upvote things you disagree with, or to upvote the post, read it and update negatively, which is presumably not what you meant here by “people changed their minds”.
I think this is a very poor way to make this estimate for most reasonable interpretations of “people changed their minds”. One charitable interpretation is that you genuinely believe post upvotes to represent people who agree or have updated positively, but this would be surprising to me.
One uncharitable interpretation is that this is a way of implying a consensus where it doesn’t exist, and conflating “good epistemics” with “people who agree with me” (“75% of people agree with us! I’m so grateful that EA epistemics are trustworthy”). Doing this may create some social pressure to conform both to the majority and to people who apparently have “good epistemics”, especially given that this claim came alongside the link to the EA Forum post in your FB post, and your call to action at the bottom, which included voting behavior. This is subtle and not necessarily what you intended, but I thought it worth pointing out because the effects may exist regardless of your intentions.
On the uncharitable case:
I think there are other examples in the post that seem reasonable at first glance but can be interpreted or misinterpreted as similar cases of creating some kind of social pressure to take the Nonlinear position. Some of these are raised in Yarrow’s comment.
Others include:
“However, if Ben pulled a Geoffrey Hinton and was able to update based on new information despite massive psychological pressure against that, that would be an act of impressive epistemic virtue. As a community, we want to make it so that people are rewarded for doing the right but hard thing, and this is one of those times.”
“EA’s high trust culture, part of what makes it great, is crumbling, and “sharing only negative information about X person/charity” posts will destroy it.”
“EA since FTX has trauma. We’re infected by a cancer of distrust, suspicion, and paranoia. Frequent witch burnings. Seeing ill-intent everywhere. Forbidden questions (in EA!) Forbidden thoughts (in EA!)
We’re attacking each other instead of attacking the world’s problems.”
Most of the rest of the section titled “So how do we learn from this to make our community better? How can we make EA antifragile?”
“This doesn’t mean EA is rife with abuse, it just means that EA is rife with humans. Humans with strong moral emotions and poor social skills on average. We should expect a lot of conflict. We need to find a better way to deal with this. Our community has been turning on itself with increasing ferocity, and we need to find a better way to recover from FTX. Let’s do what EA does best: optimize dispassionately, embody scout mindset, and interpret people charitably.”
On the charitable case:
I think it’s fairly obvious that using post upvotes is a poor way of indicating support for the Nonlinear position, because there are a lot of reasons for upvotes (or downvotes) that are unrelated to whether voters agree or disagree with the post itself.
Skimming some comments quickly (moved to footnote for ease of reading).[1]
There are obviously problems with aggregating votes which make these hard to interpret, but even if you take a looser definition, like “75% of readers now have a better net impression of Nonlinear than after Ben Pace’s post”, this still feels very unclear to me without cherry-picking comments. I’m not expecting NL to have attempted to model consensus with agreevotes, but I think it’s clear even on skimming that opinions here are mixed (this doesn’t discount the possibility of multiple NL staff agree/disagreevoting many of these posts or comments), which, ceteris paribus, makes it more surprising that the 75% claim was made.
Yarrow’s comment
“Even if most of what Kat says is factually true, this post still gives me really bad vibes and makes me think poorly of Nonlinear.”
has 68 agreevotes and 24 disagreevotes.
Lukas’ comment:
“I updated significantly in the direction of “Nonlinear leadership has a better case for themselves than I initially thought”,
“it seems likely to me that the initial post indeed was somewhat careless with fact-checking.”,
“I’m still confused about some of the fact-checking claims”, “I still find Chloe’s broad perspective credible and concerning (in a “this is difficult work environment with definite potential for toxicity” rather than “this is outright abusive on all reasonable definitions of the word”). The replies by Nonlinear leadership didn’t change my initial opinion here by too much”
has 34 agreevotes and 4 disagreevotes.
Ollie’s comment:
“I don’t have time to engage with all the evidence here, but even if I came away convinced that all of the original claims provided by Ben weren’t backed up, I still feel really uneasy about Nonlinear; uneasy about your work culture, uneasy about how you communicate and argue, and alarmed at how forcefully you attack people who criticise you.”
has 78 agreevotes and 31 disagreevotes.
Muireall’s comment/spot check:
“From my perspective, this is between “not responsive to the complaint” and “evidence for the spirit of the complaint”. It seems an overreach to call “They told me not to spend time with my boyfriend...” a “sad, unbelievable lie” “discrediting [Chloe] as a reliable source of truth” when it is not something anyone has cited Chloe as saying. It seems incorrect to describe “advised not to spend time with ‘low value people’” as in “direct contradiction” with any of this, which instead seems to affirm that traveling with Nonlinear was conditioned on “high potential” or being among the “highest quality people”. Finally, having initially considered inviting Chloe’s boyfriend to travel with them would still be entirely consistent with later deciding not to; encouraging a visit in May would still be consistent with an overall expectation that Chloe not spend too much time with her boyfriend in general for reasons related to his perceived “quality”.”
has 20 agreevotes and 3 disagreevotes.
Geoffrey’s comment:
“Whatever people think about this particular reply by Nonlinear, I hope it’s clear to most EAs that Ben Pace could have done a much better job fact-checking his allegations against Nonlinear, and in getting their side of the story.”
has 53 agreevotes and 11 disagreevotes.
Vipulnaik’s comment:
“For the most part, an initial reading of this post and the linked documents did have the intended effect on me of making me view many of the original claims as likely false or significantly exaggerated. But my own take is that the post would have been stronger had these changes been made prior to publishing. Curious to hear if others agree or disagree.”
has 24 agreevotes and 2 disagreevotes.
Peter’s comment:
“Personally, I have updated back to being relatively unconcerned about bad behaviour at Nonlinear”
has 9 agreevotes and 15 disagreevotes.
Kerry’s comment:
“to the main charges raised by Ben, this seems about as close to exonerating as one can reasonably expect to get in such cases”
has 30 agreevotes and 26 disagreevotes.
Marcus’ comment:
“Overall, I think Nonlinear looks pretty good here. I definitely think they made some mistakes, especially adding members to their work+travel arrangements, but on the whole, I think they acted pretty reasonably and were unjustly vilified.”
has 14 agreevotes and 13 disagreevotes.
John’s comment:
“I think the preliminary takeaway is that non-linear are largely innocent, but really bad at appearing that way. They derailed their own exoneration via a series of bizarre editorials, which do nothing but distract, borne out of (seemingly) righteous indignation.”
has 12 agreevotes and 13 disagreevotes.
Or “agreed with Nonlinear before this post and still agrees now”. Kat’s math assumes that literally everyone agreed with Ben’s post until now.
“This was posted here without my permission”
It was still a public post an hour ago.