Can I ask for a link to this ‘indirect post’? I’m interested in the generalized lessons being advertised here, but couldn’t find the post after looking on LW.
Ataftoti
I think the main thrust of the article is the speculation about “Customer C” — in this case, Alameda. The speculation is that FTX lent Alameda customer funds in exchange for its own token.
However, this also offers a way out, on the condition that Alameda didn’t lose whatever it received. In Matt Levine’s analogy, the funds are lost because “Customer C” ran off with them for selfish reasons, to the detriment of the bank. In this case, though, Alameda and FTX are both closely associated with Bankman-Fried.
If Alameda still holds the assets it received from FTX customers, it would then be possible to return them to those customers. FTX and Alameda would still be finished in a business sense, but the damage to customers would be minimized.
Of course, this is all blind speculation on my part, and it doesn’t look good: Alameda probably already spent the assets it got from FTX to pay off earlier loans. If that’s the case, then we’re back where we started, with customer funds simply lost.
- [deleted]
Perhaps one could just bite the bullet and vow never to work on known-dangerous tech, even if a race is possible?
Maybe the risk of losing to a totalitarian regime due to lack of superweapon advantage is an acceptable cost of lowering FUCKING EXISTENTIAL RISK?
Maybe it’s the pinnacle of human hubris to think that your specific brand of politics is worth gambling the existence of human civilization over.
I mean, the idea that superweapons can decide a major war has never had historical evidence, at all. I honestly think it’s mostly the pride of scientists fueling that fantasy. You can see this trope in modern fiction: many times the protagonist in a war story comes up with a gadget or tactic that wins the war… but any military historian would tell you that such things are unlikely to make a difference in real life. Why do we have this trope? The power-fantasy of the individual! It’s just like the fantasy of being a superhero with superhuman powers, winning wars single-handedly, except dressed up in a futile attempt at realism.
The AI scenario is a bit different if you believe in FOOM, but in that case it’s more likely to be suicidal than a targeted weapon anyway so you should still disavow it.
If there is no FOOM and AI progresses more like how Robin Hanson envisions, then past experience shows us there’s more to winning a war than simply having a new technology.
The obvious default conclusion here is that there is nothing inherent in SBF’s personality that made him more likely to do this than other EAs.
Other EAs haven’t gambled billions in customer funds simply because they didn’t have billions in customer funds. Had they been in SBF’s position, they might have fallen to the same temptation.
Needless to say, I think what SBF did was unquestionably wrong, and I condemn it. I’m simply also pessimistic enough to think that I myself, and other ordinarily-adjusted humans around me, could fall to such temptations. Somehow, I really doubt that 100% of the people condemning SBF would have resisted the temptation had they been put in the same situation.