This situation reminded me of this post: EA’s weirdness makes it unusually susceptible to bad behavior. Regardless of whether you believe Chloe and Alice’s allegations (which I do), it’s hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don’t live together, travel the world together, and become romantically entangled). A lot of the things that (no one is disputing) happened here are professionally weird; for example, these anecdotes from Ben’s summary of Nonlinear’s response (and the linked job ad):
“Our intention wasn’t just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with.”
“We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people.”
“The formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn’t”
“The semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn’t push hard on this.”
I am reminded again that, while many professional norms are stupid, a lot of them exist for good reasons. Further, I think it’s often pretty easy to disentangle the stupid professional norms from the reasonable ones by just asking: “Are there good reasons this norm exists?” (E.g., “Is there a reason employees and employers shouldn’t live together?” Yes: the power dynamics inherent to the employer/employee relationship are at odds with healthy roommate dynamics, in which people generally shouldn’t have lots of power over one another. “Is there a reason I should have to wear high heels to work in an office?” … no.) Trying to make employees part of your family unit, not negotiating financial details with your employees, covering your employees’ rent and groceries, and being in any way involved in your employees breaking the law are all behaviors that are at odds with standard professional practices, and there are very obviously good reasons for this.
Vulnerable EAs also want to follow only good norms while disposing of the bad ones!
If you offer people the heuristic “figure out if it’s reasonable and only obey it if it is,” they will often fail.
You mention clear-cut examples, but oftentimes the situation will be very grey, or will seem grey from the inside. There may be several strong arguments for why the norm isn’t a good one; the bad actor will be earnest, apologetic, and willing to let you keep your norm even though they don’t believe in it. They may seem like a nice, reasonable person trying to do the right thing in an awkward situation.
Following every norm would be quite bad. Socially enforced gendered cosmetics are disgusting and polyamory is pretty nifty.
Nonetheless, we must recognize that the same process that produces “polyamory is pretty nifty” will also produce in many people: “there’s no reason I can’t have a friendly relationship with my employer rather than an adversarial one” (these are the words they will use to describe the situation while living in their employer’s house) and “I can date my boss if we are both ethical about it.”
We must not look down on these people as though we’d never fall for it—everyone has things they’d fall for, no matter how smart they are.
My suggestion is to outsource. Google your situation. Read Reddit threads. Talk to friends; DM people who have the same job as you (and who you are certain have zero connection to your boss); chances are they’ll be happy to talk to someone in the same position.
A few asides, noting that these are basics and incomplete.
If someone uses the phrase “saving the world” on any level approaching consistent, run. Legitimate people who are working on legitimate problems do not rely on this drama. The more exciting the narrative and the more prominent a role the leader plays in it, the more skeptical you should be.
(Ah, you might say, but facts can’t be too good to be true: they are simply true or false. My answer to that would be the optimizer’s curse.)
If someone compares themselves to Professor Quirrell, run. In a few years, we’ll have enough abusers who identified with him to fill a scrapbook.
If there’s a dumb enough schmuck in EA to compare themselves to Galileo/da Vinci, exit calmly while giggling.
If someone is willing to break a social contract for utilitarian benefit, assume they’ll break other social contracts for personal benefit, e.g., sex.
If you are a somewhat attractive woman with unusual epistemic rigor, assume people will try to take advantage of that.
If someone wants unusual investment from you in a relationship, outsource.
If they say they’re uncomfortable with how much you talk to other people, this must be treated as an attempt to subvert you.
Expect to hear “I have a principled objection to lying and am utterly scandalized whenever someone does it” many times, and be prepared to catch that person lying.
If someone pitches you on something that makes you uncomfortable, but for which you can’t figure out your exact objection—or if their argument seems wrong but you don’t see the precise hole in their logic—it is not abandoning your rationality to listen to your instinct.
If someone says “the reputational risks to EA of you publishing this outweigh the benefits of exposing x’s bad behavior. if there’s even a 1% chance that AI risk is real, then this could be a tremendously evil thing to do”, nod sagely then publish that they said that.
Those last two points need a full essay to be conveyed well but I strongly believe them and think they’re important.
If someone uses the phrase “saving the world” on any level approaching consistent, run.
I use this phrase a lot, so if you think this phrase is a red flag, well, include me on the list of people who have that flag.
If someone pitches you on something that makes you uncomfortable, but for which you can’t figure out your exact objection—or if their argument seems wrong but you don’t see the precise hole in their logic—it is not abandoning your rationality to listen to your instinct.
Agreed (here, and with most of your other points). Instincts like those can be wrong, but they can also be right. “Rationality” requires taking all of the data into consideration, including illegible hunches and intuitions.
If someone says “the reputational risks to EA of you publishing this outweigh the benefits of exposing x’s bad behavior. if there’s even a 1% chance that AI risk is real, then this could be a tremendously evil thing to do”, nod sagely then publish that they said that.
Agreed!
Yeah, a quick search finds 10,000+ hits for comments about “saving the world” on this forum, many of which are by me.
I do think the phrase is a bit childish and lacks some rigor, but I’m not sure what’s a good replacement. “This project can avert 10^-9 to 10^-5 dooms defined as unendorsed human extinction or worse at 80% resilience” just doesn’t quite have the same ring to it.
I do think the phrase is a bit childish and lacks some rigor
I think the phrase is imprecise, relative to phrases like “prevent human extinction” or “maximize the probability that the reachable universe ends up colonized by happy flourishing civilizations”. But most of those phrases are long-winded, and it often doesn’t matter in conversation exactly which version of “saving the world” you have in mind.
(Though it does matter, if you’re working on existential risk, that people know you’re being relatively literal and serious. A lot of people talk about “saving the planet” when the outcome they’re worried about is, e.g., a 10% loss in current biodiversity, rather than the destruction of all future value in the observable universe.)
If a phrase is useful and tracks reality well, then if it sounds “childish” that’s more a credit to children than a discredit to the phrase.
And I don’t know what “lacks some rigor” means here, unless it’s referring to the imprecision.
Mostly, I like “saves the world” because it owns my weird beliefs about the situation I think we’re in, and states it bluntly so others can easily understand my view and push back against it if they disagree.
Being in a situation where you think your professional network’s actions have a high chance of literally killing every human on the planet in the next 20 years, or of preventing this from happening, is a very unusual and fucked up situation to be in. I could use language that downplays how horrifying and absurd this all is, but that would be deceiving you about what I actually think. I’d rather be open about the belief, so it can actually be talked about.
If someone uses the phrase “saving the world” on any level approaching consistent, run. Legitimate people who are working on legitimate problems do not rely on this drama. The more exciting the narrative and the more prominent a role the leader plays in it, the more skeptical you should be.
(Ah, you might say, but facts can’t be too good to be true: they are simply true or false. My answer to that would be the optimizer’s curse.)
I don’t think the problem stems from how important an organization thinks its work is. Emerson’s meme company had no pretense of being world-saving, and yet had toxic dynamics as well.
The problem is that high stakes are not a reason to suspend ethical injunctions or personal boundaries; those provide more protective value, not less, when applied to something with genuinely high stakes.
not negotiating financial details with your employees, covering your employees’ rent and groceries
My impression is that it’s very normal for employees to expense food and living costs during business travel without any negotiation, and that there exist common jobs where free room and board are a part of the compensation (e.g. working at a resort or on an oil rig).
being in any way involved in your employees breaking the law
I think it’s fairly common for companies to ask their employees to break the law. (Often a bad thing, from society’s perspective. But common.) I was asked to do it multiple times a day at a previous job. (A good job, at a well-regarded company. I’m not sure they even knew they were breaking the law until I pointed it out. Eventually they changed their practices—possibly because it made very little difference to the bottom line.)
With regard to weirdness in general: the biggest mistakes I see the EA movement making—with harms I estimate as far larger than the harms in the OP—are a result of insufficient weirdness, not excess weirdness. So I don’t like to discourage weirdness in a blanket sort of way.
It’s easy with the benefit of hindsight to point out a bunch of things which might have created a bad situation. What we really need is the ability to forecast the effects of individual norms in advance.