Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation. That’s a huge rationalist no-no, to try to protect a narrative, or to try to affect what another person says about you, but I see the text where Kat is saying she could ruin Alice’s reputation as just a response to Alice’s threat to ruin Nonlinear’s reputation. What would you have thought if Nonlinear just shared, for everyone’s information and without warning Alice, that Alice was a bad employee? Would Alice be bad if she tried to get them to stop?
My read on Alice’s situation was that she got into this hellish set of poor boundaries and low autonomy where she felt like a dependent servant to these people while traveling from country to country. I would have hated it, I already know. I would have hated having to fight my employer about not having to drive illegally in a foreign country. I am sure she was not wrong to hate it, but I don’t know if that’s the fault of Nonlinear, except that maybe they should have predicted that the arrangement was badly engineered and that no one would like it. Some people might have liked that situation, and it does seem valuable to be able to have unconventional arrangements.
Alice did not threaten to ruin Nonlinear’s reputation, she went ahead and shared her impressions of Nonlinear with people. If Nonlinear responded by sharing their honest opinions about Alice with people, that would be fine. In fact, they should have been doing this from the start, regardless of Alice’s actions. Instead they tried to suppress information by threatening to ruin her career. Notice how their threat reveals their dishonesty. Either Alice is a bad employee and they were painting her in a falsely positive light before, or she is a good employee and they threatened to paint her in a falsely negative light.
Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation.
I think it’s totally normal and reasonable to care about your reputation, and there are tons of actions someone could take for reputational reasons (e.g., “I’ll wash the dishes so my roommate doesn’t think I’m a slob”, or “I’ll tweet about my latest paper because I’m proud of it and I want people to see what I accomplished”) that are just straightforwardly great.
I don’t think caring about your reputation is an inherently bad or corrupting thing. It can tempt you to do bad things, but lots of healthy and normal goals pose temptation risks (e.g., “I like food, so I’ll overeat” or “I like good TV shows, so I’ll stay up too late binging this one”); you can resist the temptation without stigmatizing the underlying human value.
In this case, I think the bad behavior by Nonlinear also would have been bad if it had nothing to do with “Nonlinear wants to protect its reputation”.
Like, suppose Alice honestly believed that malaria nets are useless for preventing malaria, and Alice was going around Berkeley spreading this (false) information. Kat sends Alice a text message saying, in effect, “I have lots of power over you, and dirt I could share to destroy you if you go against me. I demand that you stop telling others your beliefs about malaria nets, or I’ll leak true information that causes you great harm.”
On the face of it, this is more justifiable than “threatening Alice in order to protect my org’s reputation”. Hypothetical-Kat would be fighting for what’s true, on a topic of broad interest where she doesn’t stand to personally benefit. Yet I claim this would be a terrible text message to send, and a community where this was normalized would be enormously more toxic than the actual EA community is today.
Likewise, suppose Ben was planning to write a terrible, poorly-researched blog post called Malaria Nets Are Useless for Preventing Malaria. Out of pure altruistic compassion for the victims of malaria, and a concern for EA’s epistemics and understanding of reality, Hypothetical-Emerson digs up a law that superficially sounds like it forbids Ben writing the post, and he sends Ben an email threatening to take Ben to court and financially ruin him if he releases the post.
(We can further suppose that Hypothetical-Emerson lies in the email ‘this is a totally open-and-shut case, if this went to trial you would definitely lose’, in a further attempt to intimidate and pressure Ben. Because I’m pretty danged sure that’s what happened in real life; I would be amazed if Actual-Emerson actually believes the things he said about this being an open-and-shut libel case. I’m usually reluctant to accuse people of lying, but that just seems to be what happened here?)
Again, I’d say that this Hypothetical-Emerson (in spite of the “purer” motives) would be doing something thoroughly unethical by sending such an email, and a community where people routinely responded to good-faith factual disagreements with threatening emails, frivolous lawsuits, and lies, would be vastly more toxic and broken than the actual EA community is today.
Good points. I admit, I’m thinking more about whether it’s justifiable to punish that behavior than about whether it’s good or bad. It makes me super nervous to feel that the stakes are so high on something that could be a mistake (or where any given instance could be a mistake), which maybe makes me worse at looking at the object-level offense.
I’d be happy to talk with you way more about rationalists’ integrity fastidiousness, since (a) I’d expect this to feel less scary if you have a clearer picture of rats’ norms, and (b) talking about it would give you a chance to talk me out of those norms (which I’d then want to try to transmit to the other rats), and (c) if you ended up liking some of the norms then that might address the problem from the other direction.
In your previous comment you said “it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation”, “That’s a huge rationalist no-no, to try to protect a narrative”, and “or to try to affect what another person says about you”. But none of those three things are actually rat norms AFAIK, so it’s possible you’re missing some model that would at least help it feel more predictable what rats will get mad about, even if you still disagree with their priorities.
Also, I’m opposed to cancel culture (as I understand the term). As far as I’m concerned, the worst person in the world deserves friends and happiness, and I’d consider it really creepy if someone said “you’re an EA, so you should stop being friends with Emerson and Kat, never invite them to parties you host or discussion groups you run, etc.” It should be possible to warn people about bad behavior without that level of overreach into people’s personal lives.
(I expect others to disagree with me about some of this, so I don’t want “I’d consider it really creepy if someone did X” to shut down discussion here; feel free to argue to the contrary if you disagree! But I’m guessing that a lot of what’s scary here is the cancel-culture / horns-effect / scapegoating social dynamic, rather than the specifics of “which thing can I get attacked for?”. So I wanted to speak to the general dynamic.)
EDIT: Sorry, it was Chloe with the driving thing.