I think trying to figure out the common thread “explaining datapoints like FTX, Leverage Research, [and] the LaSota crew” won’t yield much of worth because those three things aren’t especially similar to each other, either in their internal workings or in their external effects. “World-scale financial crime,” “cause a nervous breakdown in your employee,” and “stab your landlord with a sword” aren’t similar to each other and I don’t get why you’d expect to find a common cause. “All happy families are alike; each unhappy family is unhappy in its own way.”
There’s a separate question of why EAs and rationalists tolerate weirdos, which is more fruitful. But an answer there is also gonna have to explain why they welcome controversial figures like Peter Singer or Eliezer Yudkowsky, and why extremely ideological group houses like early Toby Ord’s [EDIT: Nope, false] or more recently the Karnofsky/Amodei household exercise such strong intellectual influence in ways that mainstream society wouldn’t accept. And frankly if you took away the tolerance for weirdos there wouldn’t be much left of either movement.
There are many disparate factors across the different cases, and the particulars of each incident really matter, lest people draw the wrong conclusions. However, I think figuring out the common threads, insofar as there are any, is what we need; otherwise we will overindex on particular cases and learn things that don’t generalize. I have meditated on what the commonalities could be and think the cases at least share what one might call “Perverse Maximization”, which I intend to apply to deontology (i.e. the sword stabbing) as well as utilitarianism. Maybe there’s a better word than “maximize”.
I think I’ve found a commonality between the sort of overconfidence these extremist positions hold and the underconfidence of EAs who are overwhelmed by analysis paralysis: a sort of ‘inability to terminate the closure-seeking process’. For the overconfident, it’s “I have found the right thing and I just need to zealously keep applying it over and over”. For the underconfident, it’s “this choice isn’t right or perfect, I need to find something better, no, I need something better, I must find the right thing”. Both share an intolerance of ambiguity, uncertainty, and shades of gray. The latter just tends to end in people quietly suffering from their perfectionism, or being the ones who get manipulated, rather than in any kind of outward explosion.
You haven’t got your facts straight. I have never lived in a group house, let alone an extremely ideological one.
Huh! Retracted. I’m sorry.