I notice I’m confused by what Anders says about the offence-defence balance.
The argument, as I understand it, is that in the far future there’ll be a lot of space—lightyears, perhaps—between warring factions/civilizations. Offensive attacks therefore won’t work well because, with all the distance the offensive weapons need to cover, the defenders will have plenty of time to block or move out of the way.
But… this relies on the defenders seeing the weapons approaching, no? And I would expect weapons of the far future to travel at or very close to the speed of light,[1] leaving the defenders little or no warning before impact, since the weapon arrives essentially alongside its own light. (Which would mean that the balance favours offence, not defence.)
This seems like a basic enough point, though, that I’m sure it’s part of Anders’ thinking already; I expect I’m missing something.
[1] e.g., high-powered lasers, other types of directed-energy weapons, or projectiles accelerated via thermonuclear reaction, pion drive, or artificial black hole
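To put a number on "very close to the speed of light" (a toy calculation of my own, not something from Anders's argument): the warning a defender gets is the gap between the weapon's own light arriving and the weapon itself arriving.

```python
# Toy arithmetic: a projectile launched from d lightyears away, travelling at
# a fraction beta of light speed. Its light arrives after d years; the
# projectile itself after d / beta years. The difference is the warning time.

def warning_time_days(d_lightyears, beta):
    """Gap, in days, between first possible detection and impact."""
    years = d_lightyears * (1.0 / beta - 1.0)
    return years * 365.25

print(warning_time_days(1.0, 0.99))   # ~3.7 days of warning per lightyear
print(warning_time_days(1.0, 0.999))  # ~0.37 days (~9 hours) per lightyear
```

So even at 0.99c rather than exactly c, the defenders' "plenty of time" shrinks to days per lightyear of separation, and at exactly c it is zero.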
I don’t think that observing lots of condemnation and little support is all that much evidence for the premise you take as given—that SBF’s actions were near-universally condemned by the EA community—compared to meaningfully different hypotheses like “50% of EAs condemned SBF’s actions.”
There was, and still is, a strong incentive to hide any opinion other than condemnation (e.g., support, genuine uncertainty) of SBF’s fraud-for-good ideology, out of legitimate fear of becoming a witch-hunt victim. By the law of prevalence, I therefore expect the number of EAs who don’t fully condemn SBF’s actions to be far greater than the number who publicly express opinions other than full condemnation.
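To make this concrete, here’s a toy calculation (numbers entirely made up by me) of how little the *public* signal distinguishes the two hypotheses when non-condemners mostly stay silent:

```python
# Toy model: condemners always speak up; non-condemners speak up only
# (1 - hide_rate) of the time. What fraction of *public* statements are
# condemnations under each hypothesis about the true condemnation rate?

def p_observed_condemnation(true_condemn_rate, hide_rate):
    """Fraction of public statements that are condemnations."""
    condemn = true_condemn_rate                          # always visible
    other = (1 - true_condemn_rate) * (1 - hide_rate)    # mostly hidden
    return condemn / (condemn + other)

# Hypothesis A: 95% of EAs truly condemn. Hypothesis B: only 50% do.
# With a 90% hide rate among non-condemners:
print(p_observed_condemnation(0.95, 0.9))  # ≈ 0.99
print(p_observed_condemnation(0.50, 0.9))  # ≈ 0.91
```

Under heavy self-censorship, a ~99% vs ~91% public condemnation rate is the entire observable difference between the two hypotheses, which is why I don’t treat the observed near-unanimity as strong evidence.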
(Note: I’m focusing on the morality of SBF’s actions, and not on executional incompetence.)
Anecdotally, of the EAs I’ve spoken to about the FTX collapse with whom I’m close—and who therefore have less incentive to hide what they truly believe from me—I’d say that between a third and a half fall into the genuinely uncertain camp (on the moral question of fraud for good causes), while the number in the support camp is small but not zero.[1]
And of those in my sample in the condemn camp, by far the most commonly cited reason is timeless decision theory / pre-committing to cooperative actions, which I don’t think is the kind of reason one jumps to when one hears that EAs condemn fraud-for-good-type thinking.