True. Before assuming a particular safeguard would have helped, though, we should check that it wasn't already in place in how people advocated for EtG. For what it's worth, my sense is that a much more culpable failing was not blowing the whistle on Sam's bad behaviour at early Alameda, even after Will and other leaders (I forget exactly who, if it's even known) were informed about it. That mistake was almost certainly far less consequential for the people harmed by FTX (I don't think it would have stopped the fraud; it might have protected EA itself), but I strongly suspect it was more knowably wrong at the time than anything anyone did or said about EtG as a general idea.
I think there are two separate but somewhat intertwined chains of inquiry under discussion here:
A historical inquiry: what happened in this case, what safeguards failed, what would have helped but wasn’t in place?
A ~first-principles re-evaluation of EtG based on an update: the catastrophic failure of the supposedly most successful instance of EtG should update us toward thinking we underestimated the risk and severity of EtG downsides. That suggests a broader re-examination of potential risks and safeguards, which may now look more appropriate than they did before the update.
By analogy, whenever there is a school shooting, I don't think it is inappropriate to analyze and discuss interventions merely because they would not have prevented that specific school shooting. However, those doing so should be careful to avoid claiming that their preferred intervention would have been effective in the case in the news.
Agreed