A genuine belief in the risks of rapid development of artificial general intelligence might well have been partly responsible for his high-risk approach to building wealth. The possibility of near-term global disaster, whether from AI or pandemics, may have represented a real imperative to grow disposable funds as quickly as possible in order to combat these immediate dangers.
I really hope this wasn’t a factor (I don’t think it was—Sam’s megalomania, seeming borderline sociopathy, and pride—stealing customer funds rather than letting Alameda fail—seem much more likely to be prominent causes of the approach he took, which went far beyond “high-risk” into outright criminality). But if it was, it went against everything Yudkowsky—arguably the biggest and doomiest proponent of AGI x-risk—has said!
Ironically, if AI timelines really are short (or biorisk great), the FTX crisis has likely significantly increased existential risk by setting back the reputation of EA and Longtermism. As a proponent of urgent action on AGI x-risk, I am saddened by the association with SBF/FTX. And the fact remains that we need to address the risk.
Also, there’s this post from April 1, 2022, several months before, warning against this type of thinking. See Q4 for the answer.
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
Thanks, had forgotten about that. Added.