Ben—thanks for this helpful information. It adds useful context to some of the FTX news.
One clarification question: in your Background section 5c, you suggested that ‘“EA leaders”… did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse’.
I agree that EA leaders might, in hindsight, have done a better job of distancing the EA movement from FTX and SBF, to protect EA’s public reputation. However, I’m not sure how much leverage EA leaders could have had in preventing or delaying FTX’s collapse.
If EA leaders had privately challenged Sam’s bad accounting, fraudulent behavior, etc., back before fall 2022, would he really have listened and behaved any differently? Would other FTX leaders or employees have behaved differently?
If a few EAs had come out as public whistleblowers questioning FTX’s legitimacy, would any VCs, crypto influencers, crypto investors, major FTX depositors, or regulators have paid any attention? (Bearing in mind that all crypto exchanges, protocols, and companies are subject to a relentless barrage of strategic or tactical ‘fear, uncertainty, & doubt’ (FUD) from rival organizations, short-sellers, ‘mainstream’ (anti-crypto) financial journalism, and ‘legacy’ (anti-crypto) financial institutions.)
These are honest questions; I really don’t know the answers, and I’d value any comments from people with more insider knowledge than I have.
I want to emphasise that Background section 5 is the OP saying, “The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to … “EA leaders” … did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse”.
In other words, I don’t think Ben is “suggesting” EA leaders are partly responsible. I think Ben is saying “I think TIME is claiming they are? Well, here’s my response...”
(But I’m glad you asked these questions; the apparent prevalence and persistence of hindsight bias in the EA movement today has been one of the biggest updates for me in recent months. I wondered if it might be because EA had generally been selecting for something like ‘smart’ but not ‘rationalist,’ but I’m not sure that the rationalists have fared much better, and I think people outside both communities do tend to fare better in relation to EA events. My latest theory is just that I’d underestimated the allure of gossip, public shaming, witch hunts, etc., and how easy it is to stir things up in an online world. Maybe I should read some of your work—it could be a grounding counterweight to the lofty rationalist and altruistic ideals I have for myself and this community!)
Ubuntu—yes, regarding the underestimated ‘allure of gossip, public shaming, witch hunts, etc’, I think the moral psychology at work in these things runs so deep that even the most rationalist & clever EAs can be prone to them—and then we can sometimes deceive ourselves about what’s really going on.
However, the moral psychology around public shaming evolved for some good adaptive reasons: to help deter bad actors, solve coordination problems, enforce social norms, virtue-signal our values, internalize self-control heuristics, etc. So I don’t think we should dismiss these instincts entirely. (My 2019 book ‘Virtue Signaling’ addresses some of these issues.)
Indeed, leveraging the power of these ‘darker’ facets of moral psychology (e.g. public shaming) has arguably been crucial in many effective moral crusades throughout history, e.g. against torture, slavery, sexism, racism, nuclear brinkmanship, chemical weapons, etc. They may still prove useful in fighting against AI X-risk...
I think it would be problematic if a society heaped full adoration on risk-takers when their risks worked out, but doled out negative social consequences (which I’ll call “shame” to track your comment) only based on ex ante expected-value analysis when things went awry. That would over-incentivize risk-taking.
To maintain proper incentives, one could argue that society should map the amount of public shame/adoration to the expected value of the decision(s) made in cases like this, whether the risk works out or not. However, it would be both difficult and burdensome to figure out all the decisions someone made, assign an EV to each, and then sum to determine how much public shame or adoration the person should get.
By assigning shame or adoration primarily based on the observed outcome, society administers these incentives in a way that makes the expected amount of public shame/adoration at least somewhat related to the EV of the decision(s) made. Unfortunately, that approach means that people whose risks don’t pan out often end up with shame that may not be morally justified.
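The incentive argument above can be sketched with a toy numerical model (all numbers and function names here are hypothetical, purely for illustration): if adoration and shame scale with the observed gain or loss, then the *expected* social feedback for a decision equals its financial EV, so ex ante incentives track EV even though any individual risk-taker only ever experiences one outcome-based extreme.

```python
# Toy model (hypothetical numbers): outcome-based social feedback,
# averaged over possible outcomes, tracks the decision's expected value.

def expected_value(p_success, gain, loss):
    """Financial EV of a risky decision."""
    return p_success * gain - (1 - p_success) * loss

def expected_social_feedback(p_success, adoration, shame):
    """Expected social payoff when feedback depends only on the outcome."""
    return p_success * adoration - (1 - p_success) * shame

# A risky bet: 10% chance of a 100-unit gain, 90% chance of a 20-unit loss.
ev = expected_value(0.10, gain=100, loss=20)                        # -8.0
# Suppose society assigns adoration/shame proportional to the outcome:
feedback = expected_social_feedback(0.10, adoration=100, shame=20)  # -8.0
# The ex ante incentive (expected feedback) matches the EV, even though any
# individual only ever sees full adoration (+100) or full shame (-20).
```

Of course, this alignment only holds on average across many risk-takers; for any single person whose risk fails, the shame actually experienced can far exceed what their ex ante decision quality warranted, which is the unfairness noted above.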
Yeah, that’s a fair point to raise. I guess I’m just lamenting that these facets aren’t refined enough by this point to produce fewer false positives.
Ubuntu—thanks for the correction; you’re right; I misread that section as reflecting Ben’s views, rather than as his steel-manning of TIME’s views. Oops.
So, please take my reply as a critique of TIME’s view, rather than as a critique of Ben’s view.
In a universe where EA leaders had a sufficiently high index of suspicion, they could have at least started publicly distancing themselves from SBF and done one or both of two things: (1) stopped working with FTXFF or encouraging people to apply, and/or (2) obtained “insurance” against fraudulent collapse by enlisting some megadonors who privately agreed in advance to immediately commit to repay all monies paid out to EA-aligned grantees if fraud ended up being discovered that inflicted relevant losses.
Public whistleblowing would likely have been a terrible approach. If the evidence were strong enough (which I really doubt it was), then it should have been communicated to the US Department of Justice or another appropriate government agency.