What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation. It’s not really clear to me that they would. The crux of your argument seems to come from this paragraph:
The community was trusting—in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.
Would this have been any different if EA consisted of an archipelago of affiliated groups? If anything, whistleblowing is easier in a large group, since you have a network of folks you can contact to raise the alarm. Without a global EA group, who exactly do the ex-Alameda folks complain to? I guess they could talk to a journalist or something, but “trading firm CEO is kind of an amoral dick” isn’t really newsworthy (I’d say that’s probably the default assumption).
I also generally disagree that making EA more low-trust is a good idea. It’s pretty well established that low-trust societies have more crime and corruption than high-trust societies. In that sense, making EA more low-trust seems counterproductive to preventing SBF v2.0. In a low-trust society, trust is typically reserved for your immediate community. This has obvious problems, though! Making trust community-based (i.e. only trusting people in my immediate EA community) seems worse than making trust idea-based (i.e. trusting anyone who espouses shared EA values). People are more likely to defend bad actors if they consider them part of their in-group.
To be honest, I’d recommend the exact opposite course of action: make EA even more high-trust. High-trust societies succeed by binding members to a common consensus on ethics and morality. EAs need to be clearer about what our expectations are with regard to ethics. It was apparently not clear to SBF that being part of the EA community means adhering to a set of norms beyond naive utilitarian calculus. The EA community should emphatically state our norms and expectations. The corollary is that members who break the rules must be called out and potentially even banished from the group.
“What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation.”
1. The community is unhealthy in various ways.
2. You’re suggesting centralizing around high trust, without a mechanism to build that trust.
I don’t think that the EA community could have stopped SBF, but it absolutely could have been independent of him, in ways that would have meant EA as a community didn’t expect a random person most of us had never heard of before this to automatically be a trusted member. Calling people out is far harder when they are a member of your trusted community, and the people who said they had concerns didn’t say it loudly because they feared community censure. That’s a big problem.
It’s also hard to call people out when a lot of you are applying to him/them for funding, and are mostly focused on trying to explain how great and deserving your project is.
One good principle here is “be picky about your funders”. Smaller amounts from steady, responsible, principled and competent funders, who both do and submit to due diligence, are better than large amounts from others.
This doesn’t mean you HAVE to agree with their politics or everything they say in public—it’s more about having proper governance in place, and funders being separate from boards and boards being separate from executive, so that undue influence and conflicts of interest don’t arise, and decisions are made objectively, for the good of the project and the stated goals, not to please an individual funder or get kudos from EAs.
I’ve written more about donor due diligence in the main thread, with links.
FWIW, I’ve generally assumed that causality goes the other way, or a third factor causes both.

Yes, economists chose to use the term ‘trust’, but I think a better term for what they are really discussing is ‘trustworthiness’; I suspect they made the substitution for optics reasons.
In agreement with the first part of this comment at least. If there were EA causes but not an EA community, it seems like much the same thing would have happened. A bunch of causes SBF thought were good would have gotten offered money, probably would have accepted the money, and then wound up accidentally laundering his reputation for being charitable while facing the prospect that some of the money they got was ill-gotten, and some of the money they had planned on getting wasn’t going to come. Maybe SBF wouldn’t have made his money to begin with? I find it unlikely: ideas like earning to give, ends-justifies-means naive consequentialism, and high-risk strategies for making more money are all ideas that people associate with EA, but which don’t appeal to anything like a “community”. This isn’t to say none of these points are important aside from SBF, but well, it’s just odd to see them get so much attention because of him. Similar points have been made in Democratizing Risk, and in a somewhat different way in the recent pre-collapse Clearer Thinking interview with Michael Nielsen and Ajeya Cotra. Maybe it’s still worth framing this in terms of SBF if now is an unusually good chance to make major movement changes, but at the same time I find it a little iffy. It seems misleading to frame this in terms of SBF if SBF didn’t actually provide us with good reasons to update in this direction, and it feels a bit perverse to use such a difficult time to promote an unrelated hobbyhorse, as a more recent post harped on (I think a bit too much, but I have some sympathy for it).
Agree with your post and want to add one thing. Ultimately this was a failure of EA ideas more so than of the EA community. SBF used EA ideas as a justification for his actions. Very few EAs would condone his amoral stance w.r.t. business ethics, but business ethics isn’t really a central part of EA ideas. Ultimately, I think the main failure was EAs failing to adequately condemn naive utilitarianism.
I think back to the old Scott Alexander post about the rationalist community: Yes, We Have Noticed The Skulls | Slate Star Codex. I think he makes a valid point, that the rationalist community has tried to address the obvious failure modes of rationalism. This is also true of the EA community, in that there has absolutely been some criticism of galaxy-brained naive utilitarianism. However, there is a certain defensiveness in Scott’s post, an annoyance that people keep bringing up past failure modes even though rationalists try really hard to not fail that way again. I suspect this same defensiveness may have played a role in EA culture. Utilitarianism has always been criticized for the potential that it could be used to justify... well, SBF-style behavior. EAs can argue that we have newer and better formulations of utilitarianism / moral theory that don’t run into that problem, and this is true (in theory). However, I do suspect that this topic was undervalued in the EA community, simply because we were super annoyed at critics who kept harping on the risks of naive utilitarianism even though clearly no real EA actually endorses naive utilitarianism.
“Ultimately this was a failure of EA ideas more so than of the EA community. SBF used EA ideas as a justification for his actions. Very few EAs would condone his amoral stance w.r.t. business ethics, but business ethics isn’t really a central part of EA ideas. Ultimately, I think the main failure was EAs failing to adequately condemn naive utilitarianism.”
So I disagree with this because:
1. It’s unclear whether it’s right to attribute SBF’s choices to a failure of EA ideas. Following SBF’s interview with Kelsey Piper, and based on other things I’ve been reading, I don’t think we can be sure at this point whether SBF was generally more motivated by naive utilitarianism or by seeking to expand his own power and influence. And it’s unclear which of those headspaces led him to the decision to defraud FTX customers.
2. It’s plausible there actually were serious ways that the EA community failed with respect to SBF. According to a couple of accounts, at least several people in the community had reason to believe SBF was dishonest and sketchy. Some of them spoke up about it and others didn’t. The accounts say that these concerns were shared with more central leaders in EA, who didn’t take much action based on that information (e.g. they could have stopped promoting Sam as a shining example of an EA after learning of reports that he was dishonest, even if they continued to accept funding from him). [1]
If this story is true (we don’t know for sure yet), then that would likely point to community failures, in the sense that EA had a fairly centralized network of community and funding that was vulnerable, and it failed to distance itself from a known or suspected bad actor. This is pretty close to the OP’s point about the EA community being high-trust while so far not developing sufficient mechanisms to verify that trust as it has scaled.
--
[1]: I do want to clarify that, in addition to this story still being unconfirmed, I’m mostly not trying to place a ton of blame or hostility on EA leaders who may have made mistakes. Leadership is hard, the situation sounds hard, and I think EA leaders have done a lot of good things outside of this situation. What we find out may reduce how much responsibility I think the EA movement should place on those people, but overall I’m much more interested in looking at systemic problems/solutions than fixating on the blame of individuals.