Zooming out from this particular case, I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong. If we don’t want to have strict professional norms, I think it’s unfair to put all the blame on failed experiments without updating the algorithm that allows people to embark on these experiments with community approval.
To be perfectly clear, I think this community has poor professional boundaries and a poor understanding of why normie boundaries exist. I would like better boundaries all around. I don’t think we get better boundaries by acting like a failure like this is due to character or lack of integrity instead of bad engineering. If you wouldn’t have looked at it before it imploded and thought the engineering was bad, I think that’s the biggest thing that needs to change. I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.
I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong.
Yep, I think this is a big problem.
More generally, I think a lot of EAs pay lip service to the value of people trying weird new ambitious things, “adopt a hits-based approach”, “if you’re never failing then you’re playing it too safe”, etc.; but then we harshly punish visible failures, especially ones that are the least bit weird. In cases like those, I think the main solution is to be more forgiving of failures, rather than to give up on ambitious projects.
From my perspective, none of this is particularly relevant to what bothers me about Ben’s post and Nonlinear’s response. My biggest concern about Nonlinear is their attempt to pressure people into silence (via lawsuits, bizarre veiled threats, etc.), and “I really wish EAs would experiment more with coercing and threatening each other” is not an example of the kind of experimentalism I’m talking about when I say that EAs should be willing to try and fail at more things (!).
“Keep EA weird” does not entail “have low ethical standards”. Weirdness is not an excuse for genuinely unethical conduct.
I don’t think we get better boundaries by acting like a failure like this is due to character or lack of integrity instead of bad engineering.
I think the failures that seem like the biggest deal to me (Nonlinear threatening people and trying to shut down criticism and frighten people) genuinely are matters of character and lack of integrity, not matters of bad engineering. I agree that not all of the failures in Ben’s OP are necessarily related to any character/integrity issues, and I generally like the lens you’re recommending for most cases; I just don’t think it’s the right lens here.
Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation. That’s a huge rationalist no-no, to try to protect a narrative, or to try to affect what another person says about you, but I see the text where Kat is saying she could ruin Alice’s reputation as just a response to Alice’s threat to ruin Nonlinear’s reputation. What would you have thought if Nonlinear just shared, without warning Alice, that Alice was a bad employee for everyone’s information? Would Alice be bad if she tried to get them to stop?
My read on Alice’s situation was that she got into this hellish set of poor boundaries and low autonomy where she felt like a servant, dependent on these people, while traveling from country to country. I would have hated it, I already know. I would have hated having to fight my employer about not having to drive illegally in a foreign country. I am sure she was not wrong to hate it, but I don’t know if that’s the fault of Nonlinear except that maybe they should have predicted that was bad engineering that no one would like. Some people might have liked that situation, and it does seem valuable to be able to have unconventional arrangements.
EDIT: Sorry, it was Chloe with the driving thing.
Alice did not threaten to ruin Nonlinear’s reputation, she went ahead and shared her impressions of Nonlinear with people. If Nonlinear responded by sharing their honest opinions about Alice with people, that would be fine. In fact, they should have been doing this from the start, regardless of Alice’s actions. Instead they tried to suppress information by threatening to ruin her career. Notice how their threat reveals their dishonesty. Either Alice is a bad employee and they were painting her in a falsely positive light before, or she is a good employee and they threatened to paint her in a falsely negative light.
Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation.
I think it’s totally normal and reasonable to care about your reputation, and there are tons of actions someone could take for reputational reasons (e.g., “I’ll wash the dishes so my roommate doesn’t think I’m a slob”, or “I’ll tweet about my latest paper because I’m proud of it and I want people to see what I accomplished”) that are just straightforwardly great.
I don’t think caring about your reputation is an inherently bad or corrupting thing. It can tempt you to do bad things, but lots of healthy and normal goals pose temptation risks (e.g., “I like food, so I’ll overeat” or “I like good TV shows, so I’ll stay up too late binging this one”); you can resist the temptation without stigmatizing the underlying human value.
In this case, I think the bad behavior by Nonlinear also would have been bad if it had nothing to do with “Nonlinear wants to protect its reputation”.
Like, suppose Alice honestly believed that malaria nets are useless for preventing malaria, and Alice was going around Berkeley spreading this (false) information. Kat sends Alice a text message saying, in effect, “I have lots of power over you, and dirt I could share to destroy you if you go against me. I demand that you stop telling others your beliefs about malaria nets, or I’ll leak true information that causes you great harm.”
On the face of it, this is more justifiable than “threatening Alice in order to protect my org’s reputation”. Hypothetical-Kat would be fighting for what’s true, on a topic of broad interest where she doesn’t stand to personally benefit. Yet I claim this would be a terrible text message to send, and a community where this was normalized would be enormously more toxic than the actual EA community is today.
Likewise, suppose Ben was planning to write a terrible, poorly-researched blog post called Malaria Nets Are Useless for Preventing Malaria. Out of pure altruistic compassion for the victims of malaria, and a concern for EA’s epistemics and understanding of reality, Hypothetical-Emerson digs up a law that superficially sounds like it forbids Ben writing the post, and he sends Ben an email threatening to take Ben to court and financially ruin him if he releases the post.
(We can further suppose that Hypothetical-Emerson lies in the email ‘this is a totally open-and-shut case, if this went to trial you would definitely lose’, in a further attempt to intimidate and pressure Ben. Because I’m pretty danged sure that’s what happened in real life; I would be amazed if Actual-Emerson actually believes the things he said about this being an open-and-shut libel case. I’m usually reluctant to accuse people of lying, but that just seems to be what happened here?)
Again, I’d say that this Hypothetical-Emerson (in spite of the “purer” motives) would be doing something thoroughly unethical by sending such an email, and a community where people routinely responded to good-faith factual disagreements with threatening emails, frivolous lawsuits, and lies, would be vastly more toxic and broken than the actual EA community is today.
Good points. I admit, I’m thinking more about whether it’s justifiable to punish that behavior than about whether it’s good or bad. It makes me super nervous to feel that the stakes are so high on something that could be a mistake (or where any given instance could be a mistake), which maybe makes me worse at looking at the object-level offense.
I’d be happy to talk with you way more about rationalists’ integrity fastidiousness, since (a) I’d expect this to feel less scary if you have a clearer picture of rats’ norms, and (b) talking about it would give you a chance to talk me out of those norms (which I’d then want to try to transmit to the other rats), and (c) if you ended up liking some of the norms then that might address the problem from the other direction.
In your previous comment you said “it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation”, “That’s a huge rationalist no-no, to try to protect a narrative”, and “or to try to affect what another person says about you”. But none of those three things are actually rat norms AFAIK, so it’s possible you’re missing some model that would at least help it feel more predictable what rats will get mad about, even if you still disagree with their priorities.
Also, I’m opposed to cancel culture (as I understand the term). As far as I’m concerned, the worst person in the world deserves friends and happiness, and I’d consider it really creepy if someone said “you’re an EA, so you should stop being friends with Emerson and Kat, never invite them to parties you host or discussion groups you run, etc.” It should be possible to warn people about bad behavior without that level of overreach into people’s personal lives.
(I expect others to disagree with me about some of this, so I don’t want “I’d consider it really creepy if someone did X” to shut down discussion here; feel free to argue to the contrary if you disagree! But I’m guessing that a lot of what’s scary here is the cancel-culture / horns-effect / scapegoating social dynamic, rather than the specifics of “which thing can I get attacked for?”. So I wanted to speak to the general dynamic.)
Can you give examples of EAs harshly punishing visible failures that weren’t matters of genuine unethical conduct? I can think of some pretty big visible failures that didn’t lead to any significant backlash (and actually get held up as positive examples of orgs taking responsibility). For example, Evidence Action discovering that No Lean Season didn’t work and terminating it, or GiveDirectly’s recent fraud problems after suspending some of their standard processes to get out money in a war zone. Maybe people have different standards for failure in longtermist/meta EA stuff?
To add sources, here are some recent examples that come to mind that broadly support MHR’s point above re: visible (ex post) failures that don’t seem to have been harshly punished (most responses seem somewhere between neutral and supportive, at least publicly):
Lightcone
Alvea
ALERT
AI Safety Support
EA hub
No Lean Season
Some failures that came with a larger proportion of critical feedback probably include the Carrick Flynn campaign (1, 2, 3), but even here “harshly punish” seems like an overstatement. HLI also comes to mind (and despite highly critical commentary in earlier posts, I think the highly positive response to this specific post is telling).
============
On the extent to which Nonlinear’s failures relate to integrity / engineering, I think I’m sympathetic to both Rob’s view:
I think the failures that seem like the biggest deal to me (Nonlinear threatening people and trying to shut down criticism and frighten people) genuinely are matters of character and lack of integrity, not matters of bad engineering.
As well as Holly’s:
If you wouldn’t have looked at it before it imploded and thought the engineering was bad, I think that’s the biggest thing that needs to change. I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.
but do not think these are necessarily mutually exclusive. Specifically, it sounds like Rob is mainly thinking about the source of the concerns, and Holly is thinking about what to do going forwards. And it might be the case that the most helpful actionable steps going forward are things that look more like improving boundaries and systems, regardless of whether you believe failures specific to Nonlinear are caused by deficiencies in integrity or engineering.
That said, I agree with Rob’s point that the most significant allegations raised about Nonlinear quite clearly do not fit the category of ‘appropriate experimentation that the community would approve of’, under almost all reasonable perspectives.
I was thinking of murkier cases like the cancellation of Leverage and people taking small infractions on SBF’s part as foreshadowing of the fall of FTX (which I don’t think was enough of an indication), but admittedly those all involve parties that are guilty of something. Maybe I’m just trying too hard to be fair, or to treat people the way I want to be treated when I make a mistake.
“I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.”
I strongly agree with this.
I think EA fails to recognise that traditional professional boundaries are a safeguard against tail risks and that these tail risks still remain when people appear to be kind / altruistic / rational.
Even though I don’t think EA needs to totally replicate outside norms, I do agree that there are good reasons why quite a few norms exist.
I’d say the biggest norms from outside that EA needs to adopt are less porous boundaries on work/dating and, importantly, actually having normalish pay structures/work environments.
I agree about the bad engineering. Apart from boundary norms we might also want to consider making our organizations more democratic. This kind of power abuse is a lot harder when power is more equally distributed among the workers. Bosses making money while paying employees nothing or very little occurs everywhere, but co-ops tend to have a lot less inequality within firms. They also create higher job satisfaction, life satisfaction and social trust. Furthermore, research has shown that employees getting more ownership of the company is associated with higher perceptions of fairness, information sharing and cooperation. It’s no wonder then that co-ops have a lower turnover rate.
EDIT: After Ben’s comment I changed ‘raking in profits’ to ‘making money’. I do think this proposal is relevant for the conversation since the low pay, bad work environment and worsening mental health are a big part of the problem described in the post.
Concerns about “bosses raking in profits” seem pretty weird to raise in a thread about a nonprofit, in a community largely comprised of nonprofits. There might be something in your proposal in general, but it doesn’t seem relevant here.