Epistemic status: I haven’t read much about this story and I don’t have a considered opinion about the allegations. My prior is that these things usually turn out to be true after more investigation, and the below was written from the perspective of “I assume that Gates did in fact behave inappropriately in some way.”
The link is paywalled for me, but I’m disappointed to see the news. (Though I’m happy to see that Bill and Melinda say they plan to continue the Gates Foundation’s work.)
This kind of incident often makes me think of this quote from Holden Karnofsky:
In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities; I wouldn’t skip an important family event (even one that had little meaning for me) in order to save time for GiveWell work.
I think this is a very common position within EA — that we should behave ethically in “standard” ways and avoid using altruistic work to cover or excuse unethical behavior. (See this great comment from Julia Wise or “Everyday Longtermism” for more on that view.)
I don’t remember seeing anyone in the community condone someone’s unethical behavior on the basis of their impact (vs. contesting whether the behavior itself was unethical, as in debates over Peter Singer’s most controversial views). Are there any examples I’m missing?
*****
The story also makes me think of Thomas Pogge, who was involved in EA early on but doesn’t seem to have been involved after being accused of sexual harassment. I’d guess that wasn’t a coincidence, though I only know my own story: the Yale EA group, which I led at the time, dropped him as an advisor after this happened. (It never occurred to us to defend his behavior.)
This isn’t to say that EA should avoid future contact with Gates. But I don’t expect to see anyone say “it’s fine he did that stuff, because he saved so many lives”.
First, I’m not condoning Bill’s behavior. My intuition is that it’s good to be trustworthy, to not sexually harass anyone, etc. That said, I didn’t find any of the linked arguments particularly convincing.
“In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it.”
Sure, you generally shouldn’t be a jerk, but being kind generally isn’t mutually exclusive with achieving goals. Beyond that, what does “overlay” mean? The statement is quite vague, and I’m fairly sure there is some bar of family event that he would skip. I’m sure 99%+ of his work with GiveWell is not time-sensitive in the way a family event is, so the statement somewhat sidesteps the real question of opportunity cost. In fact, Holden even says in the blog post that nothing is absolute. It’s also potentially presentist: I would love for people to treat me with respect and kindness, but I would probably prefer that past people had just built infrastructure.
And again with Julia’s statement: she’s just saying “Because we believe that trust, cooperation, and accurate information are essential to doing good.” Okay, that could be true, but isn’t that the core of the question we’re asking? When we talk about these types of situations, we are to some extent asking: is it possible that person or group X did more good by not being trustworthy, cooperative, etc.?

Maybe this feels less relevant for EA research, but what about EAs running businesses? Microsoft got to the top with extremely scummy tactics, and now we think Bill Gates may be one of the greatest EAs ever. That isn’t meant as a steel counterargument; I’m just pointing out that it’s not hard to spin a sentence that contradicts the point. And to swing back to the original topic: it seems extremely unlikely that sexually harassing people is ever essential, or even helpful, to having more impact. So it seems fair to say “don’t sexually harass people,” but not on the grounds that “you should always default to standard generosity, only overlaying your biased agenda on top of that first level of generosity.”

However, what about having an affair? What if he was miserable and looking for love? If the affair made him 0.5% more productive, there is at least a surface-level utilitarian argument in its favor. The same goes for his money manager: if he thought Larson would make 0.5% higher returns than the next best person, with most of that going to high-impact charity, you can once again spin a (potentially nuance-lacking) argument in favor. And what is the nuance here? It’s about how not being standardly good affects your reputation, affects culture, affects institutions, hurts people’s feelings, etc.
*I also want to point out that Julia is making a utilitarian-backed claim (that trust, etc. are instrumentally important), while Holden is backing some sort of moral pluralism (though maybe also endorsing the hypothesis that kindness/standard goodness is instrumentally valuable).
So while I generally agree with Holden and Julia on an intuitive level, I think it would be nice if someone actually presented a steelmanned argument (maybe someone has) for what types of unethical behavior could be condoned, or for where the edges of these decisions lie. The EA brand may not want to be associated with that essay, though.
It feels a bit to me like EAs are often not naturally “standardly kind,” or at least are not utility-maximizing, because they are so awkward or bad at socializing (in part due to the standard complaints about dark-web, rationalist types), which has bad effects on our connections and careers, as well as on EA’s general reputation. So Central EA is saying: let’s push people toward kindness so that we have a reputation for being nice, rather than thinking critically about the edge cases, because that will move our group closer to the correct value of not being weirdos and not getting cancelled (plus there are potentially more important topics to explore, given that being kind is a fairly safe bet).
This is a good comment! Upvoted for making a reasonable challenge to a point that often goes unchallenged.
There are trade-offs to honesty and cooperation, and sometimes those virtues won’t be worth the loss of impact or potential risk. I suspect that Holden!2013 would endorse this; he may come off as fairly absolutist here, but I think you could imagine scenarios where he would, in fact, miss a family event to accomplish some work-related objective (e.g. if a billion-dollar grant were at stake).
I don’t know how relevant this fact is to the Gates case, though.
While I don’t have the time to respond point-by-point, I’ll share some related thoughts:
- My initial comment was meant to be descriptive rather than prescriptive: in my experience, most people in EA seem to be aligned with Holden’s view. Whether they should be is a different question.
- I include myself in the list of those aligned, but like anyone, I have my own sense of what constitutes “standard”, and my own rules for when a trade-off is worthwhile or when I’ve hit the limit of “trying”.
- Still, I think I ascribe a higher value than most people to “EA being an unusually kind and honest community, even outside its direct impact”.
I don’t understand what would result from an analysis of “what types of unethical behavior could be condoned”:
- Whatever result someone comes up with, their view is unlikely to be widely adopted, even within EA (given differences in people’s ethical standards).
- In cases where someone behaves unethically within the EA community, there are so many small details we’ll know about that trying to argue for any kind of general rule seems foolhardy. (Especially since “not condoning” can mean so many different things: whether someone is fired, whether they speak at a given event, whether a given org decides to fund them...)
- In cases outside EA (e.g. that of Gates), the opinion of some random people in EA has effectively no impact.
All in all, I’d rather replace questions like “should we condone person/behavior X?” with “should person X be invited to speak at a conference?” or “should an organization still take grant money from a person who did X?” Or, more broadly, “is it acceptable to lie in a situation like X if the likely impact is Y?”
Personal views, not speaking for/about CEA.