Claude’s Summary:
Here are a few key points summarizing Will MacAskill’s thoughts on the FTX collapse and its impact on effective altruism (EA):
He believes Sam Bankman-Fried did not engage in a calculated, rational fraud motivated by EA principles or long-termist considerations. Rather, it seems to have stemmed from hubris, incompetence, and a failure to put proper risk controls in place as FTX grew rapidly.
The fraud and collapse have been hugely damaging to EA’s public perception and to morale within the community. However, the core idea of using reason and evidence to do the most good remains valid.
Leadership at major EA organizations has essentially turned over in the aftermath. Will has stepped back from governance roles to allow more decentralization.
He does not think the emphasis on long-termism within EA was a major driver of the FTX issues. If anything, near-term considerations like global health and poverty reduction could provide similar motivation for misguided risk-taking.
His views on long-termism have evolved to focus more on near-term AI risk than on cosmic timescales, given the potential for advanced AI systems to pose existential risks to the current generation within decades.
Overall, while hugely damaging, he sees the FTX scandal as distinct from the valid principles of effective altruism rather than undermining them entirely. But it has prompted substantial re-evaluation and restructuring within the movement.
I have written a bit about this (and related topics) in the past:
https://www.lesswrong.com/posts/5jA3Tvxh2jFcFBzqR/risk-premiums-vs-prediction-markets
I think you make a fairly good argument (in iv) about maximising the probability of achieving outcome x, where x could be set to a small number, but I expect futarchy proponents would argue that you can fix this by having markets return E[outcome] rather than P(outcome > x). Society would then vote for the policy that maximises the expected outcome rather than the probability of clearing a threshold. (Or you could look at P(outcome > x) for a range of x.)
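To make the difference concrete, here is a small sketch with made-up numbers (the policies and distributions are purely illustrative, not from the original argument): a P(outcome > x) market and an E[outcome] market can recommend different policies for the same two options.

```python
import random

random.seed(0)

# Hypothetical outcome distributions for two policies (illustrative numbers).
# Policy A: reliable moderate outcome; Policy B: usually poor, occasionally huge.
N = 100_000
a = [random.gauss(10, 1) for _ in range(N)]
b = [100.0 if random.random() < 0.05 else 2.0 for _ in range(N)]

x = 50  # an ambitious threshold
p_a = sum(v > x for v in a) / N   # ~0: A essentially never clears 50
p_b = sum(v > x for v in b) / N   # ~0.05
e_a = sum(a) / N                  # ~10
e_b = sum(b) / N                  # ~6.9  (= 0.05*100 + 0.95*2)

# A market on P(outcome > 50) favours B, while a market on E[outcome]
# favours A — so the choice of question changes the policy chosen.
print(p_a < p_b, e_a > e_b)  # True True
```

With a small threshold like x = 5 the two criteria happen to agree here, which is the sense in which scanning P(outcome > x) over a range of x recovers most of the information in E[outcome].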
You wrote on reddit:
But I think none of your explanation here actually relies on this correlation (and I think this is extremely important). Risk-neutrality arguments are not the right framing. For example, a coin flip is a risky bet, but that doesn’t mean its price will be less than 1⁄2, because there’s a symmetry between bidding on heads and bidding on tails. It’s just more likely that you don’t bet at all: if you are risk-averse, you value H at 0.45 and T at 0.45.
The key difference is that if the coin flip is correlated with the real economy, such that the dollar-weighted average person would rather live in a world where heads comes up than one where tails does, then tails contracts will trade for more than heads contracts.
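The correlation argument can be sketched with a two-state asset-pricing toy model. The specific numbers and the choice of log utility are my assumptions, not from the text: heads coincides with a boom, tails with a recession, and state prices are weighted by marginal utility, so the tails contract trades above the coin's true probability.

```python
# Two-state sketch of the correlation argument (log utility is an assumption).
p_heads, p_tails = 0.5, 0.5          # fair coin: true probabilities
w_heads, w_tails = 2.0, 1.0          # wealth: heads = boom, tails = recession

# Marginal utility under log utility: u'(w) = 1/w.
mu_heads, mu_tails = 1 / w_heads, 1 / w_tails

# State prices, normalised so a contract paying $1 in every state costs $1
# (i.e. a zero risk-free rate).
denom = p_heads * mu_heads + p_tails * mu_tails
price_heads = p_heads * mu_heads / denom   # 1/3
price_tails = p_tails * mu_tails / denom   # 2/3

# Tails pays off exactly when wealth is low, so it trades above its true
# probability of 0.5: the market price reflects a risk premium, not P(tails).
print(round(price_heads, 3), round(price_tails, 3))  # 0.333 0.667
```

An uncorrelated coin would have equal wealth in both states, equal marginal utilities, and both contracts would trade at exactly 0.5 — matching the symmetry point above.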