My updates after FTX

Here are some thoughts on what I’ve learned from what happened with FTX, and (to a lesser degree) other events of the last 6 months.

I can’t give all my reasoning, and have focused on my bottom lines. Bear in mind that updates are relative to my starting point (you could update oppositely if you started in a different place).

In the second half, I list some updates I haven’t made.

I’ve tried to make updates about things that could have actually reduced the chance of something this bad happening, or where a single data point can be significant, or where one update entails another.

For the implications, I’ve focused on those in my own work, rather than speculating about the EA community or adjacent communities as a whole (though I’ve done some of that). I wrote most of this doc in Jan.

I’m only speaking for myself, not 80,000 Hours or Effective Ventures Foundation (UK) or Effective Ventures Foundation USA Inc.

The updates are roughly in logical order (earlier updates entail the later ones) with some weighting by importance /​ confidence /​ size of update. I’m sorry it’s become so long – the key takeaways are in bold.

I still feel unsure about how best to frame some of these issues, and how important different parts are. This is a snapshot of my thinking and it’s likely to change over the next year.

Big picture, I do think significant updates and changes are warranted. Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).

The first section makes for demoralising reading, so it’s maybe worth also saying that I still think the core ideas of EA make sense, and I plan to keep practising them in my own life.

I hope people keep working on building effective altruism, and in particular, now is probably a moment of unusual malleability to improve its culture, so let’s make the most of it.

List of updates

1. I should take more seriously the idea that EA, despite having many worthwhile ideas, may attract some dangerous people i.e. people who are able to do ambitious things but are reckless /​ self-deluded /​ deceptive /​ norm-breaking and so can have a lot of negative impact. This has long been a theoretical worry, but it wasn’t clearly an empirical issue – it seemed like potential bad actors either hadn’t joined or had been spotted and constrained. Now it seems clearly true. I think we need to act as if the base rate is >1%. (Though if there’s a strong enough reaction to FTX, it’s possible the fraction will be lower going forward than it was before.)

2. Due to this, I should act on the assumption EAs aren’t more trustworthy than average. Previously I acted as if they were. I now think the typical EA probably is more trustworthy than average – EA attracts some very kind and high-integrity people – but it also attracts plenty of people with normal amounts of pride, self-delusion and self-interest, and there’s a significant minority who seem more likely to defect or be deceptive than average. Because this minority exists, and because it’s hard to tell who is who, you need to assume by default that someone might be untrustworthy (even if “they’re an EA”). This doesn’t mean distrusting everyone by default – I still think it’s best to default to being cooperative – but it’s vital to have checks for and ways to exclude dangerous actors, especially in influential positions (i.e. trust, but verify).

3. EAs are also less competent than I thought, and have worse judgement of character and competence than I thought. I’d taken financial success as evidence of competence; I no longer think that was warranted. I also wouldn’t have predicted that we’d be deceived so thoroughly, so I negatively update on our ability to judge character and competence, especially the idea that it’s unusually good. My working assumption now is that we’re about average. This update applies most to the people who knew SBF best, though I don’t expect many others would have done much better if their places were swapped. (More speculatively, it seems plausible to me that many EAs have worse judgement of character than average, because e.g. they project their good intentions onto others.)

4. Personally, I think I was also biased by the halo effect, by wanting EA to be “winning”, and by my prior belief that EAs were unusually competent. It seems like others found it hard to criticise FTX because it’s hard to criticise your in-group and social group, especially if doing so might have implications for your career. A rule of thumb going forward: if someone who might feel ‘on your side’ appears to be doing unusually well, try to increase scrutiny rather than reduce it.

5. I’m more concerned about people on the far end of the “aggressive optimizing” style i.e. something like people who are (over)confident in a narrow inside view, and willing to act ambitiously on it, even if it breaks important norms. In contrast, I feel more into moderation of action, pluralism of views, humility, prudence and respect for cooperative norms. (I’m still unsure how to best frame all this.)

In particular, it’s important to bear in mind that an “aggressive optimizing” personality combines badly with a radical worldview, because the worldview can encourage or help rationalise it. Such worldviews could include certain types of effective altruism, longtermism, rationalism, and utilitarianism, among many other widespread views in society, like radical socialism, deep ecology, or protest movements that don’t rule out violence. It also combines badly with any tendency for self-delusion.

I think where to sit on the spectrum from contrarian action to moderation is a really difficult question, since some degree of unusual action is necessary if you’re serious about helping others. But it seemed like SBF was fairly extreme on this cluster, as are others who have engaged in worrying behaviour on a smaller scale. So my increased concern about dangerous people makes me more concerned about attracting or encouraging people with this style.

I’m not sure we want the median person in the community to moderate more. The key thing is to avoid attracting and supporting people on the far end of the spectrum, as I think current EA probably does: EA is about optimization and rethinking ethics, so it wouldn’t be surprising if it attracted some extreme optimizers who are willing to question regular norms. I think this entails being more concerned about broadcasting radical or naively maximising views (e.g. expressing more humility in writing even if it will reach fewer people), and having a culture that’s more hostile to this style. For example, I think we should be less welcoming to proudly self-identified & gung-ho utilitarians, since they’re more likely to have these traits.

I feel very tempted to personally take a step towards moderation in my own worldview. This would make me a bit less confident in my most contrarian positions, including some EA ones.

(Note there is an epistemic component to this cluster (how much confidence someone has in their inside view), but the more important part is the willingness to act on these views, even if it violates other important norms. I’m keen for people to develop and experiment with their own inside views. But it’s quite possible to have radical inside views while being cautious in your actions.)

6. All the above makes me feel tempted to negatively update on the community’s epistemics across the board. You could reason that if EA epistemics were unusually good, we should have had a better-than-typical chance of spotting this, but we actually ended up deceived to a similar degree as professional investors, the media, the crypto community etc., so our epistemics were approximately no better than those other groups’. This could imply negatively updating on all of EA’s contrarian positions, in proportion to how different they are from conventional wisdom. On the other hand, judgement of character & financial competence are pretty different from judgement about e.g. cause selection. It doesn’t make sense to me to, say, seriously downweight a pandemic researcher’s warnings about the chance of a pandemic because they didn’t spot that their partner was cheating on them. So, overall I don’t make a big update on the community’s judgement and epistemics when it comes to core EA ideas, though I feel pretty unsure about it.

What seems clearer is that we should be skeptical of the idea that EAs have better judgement about anything that’s not a key area of EA or of personal expertise, and should use conventional wisdom, expert views or base rates for those (e.g. how to run an org; likelihood of making money; likelihood of fraud). A rule of thumb to keep in mind: “don’t defer to someone solely because they’re an EA.” Look for specific expertise or evaluate the arguments directly.

The previous four points together make me less confident in the current community’s ability to do the “practical project” of effective altruism, especially if you think it requires unusually high levels of operational competence, general judgement, wise action or internal trust. That could suggest focusing more on the intellectual project and hoping others pick up the ideas – I haven’t updated much on our ability to do the intellectual project of EA, and think a lot of progress can be made by applying ‘normal’ epistemics to neglected questions. Within the practical project, it would suggest focusing on areas with better feedback loops and lower downsides in order to build competence, and going slower in the meantime.

7. If you have concerns about someone, don’t expect that surrounding them with people you aren’t concerned about will prevent dangerous action, especially if that person seems unusually strong-willed.

8. Governance seems more important. Since there are dangerous people around and we’re worse at judging who is who, we need better governance to prevent bad stuff. (And if someone with a lot of power is acting without governance, you should think there’s a non-negligible chance of wrongdoing at some point, even if you agree with them on object-level questions. This could also suggest EA orgs shouldn’t accept donations from organisations without sufficient governance.) This doesn’t need to mean a ton of bureaucracy, which could slow down a lot of projects, but it does mean things like having basic accounting (to be clear, most orgs have this already), and, for larger organisations, striving to have a board that actually tries to evaluate the CEO (appointing more junior people who have more headspace if that’s what’s needed). This is not easy, for the reasons here, though overall I’d like to see more of it.

I feel more into creating better mechanisms for collecting anonymous concerns and stronger whistleblower protection, though it’s worth noting that most EA orgs already have whistleblower protection (it’s legally required in the UK and US), and the community health team already has a form for collecting anonymous concerns (the SEC also provides whistleblower protection for fraud). Better whistleblower protection probably wouldn’t have uncovered what happened at FTX, but now seems like a good moment to beef up our systems in that area anyway.

9. Character matters more. Here you can think of ‘character’ minimally as someone’s habits of behaviour. By ‘character matters’ I mean a lot of different things, including:

  • People will tend to act in the ways they have in the past unless given very strong evidence otherwise (stronger than them saying they’ve changed, doing a few things about it, and some years going by).

  • Character virtues like honesty, integrity, humility, prudence, moderation & respect for cooperative norms are even more important than I thought (in order to constrain potentially dangerous behaviour, and to maintain trust, truth-tracking and reputation).

  • If you have small concerns about someone’s character, there are probably worse things you don’t yet know about.

  • Concerns with character become more significant when combined with high stakes, especially the chance of large losses, and lack of governance or regulation.

BUT it’s also harder to assess good character than I thought, and also harder to constrain dangerous actors via culture.

So I think the net effect is:

  • Put more weight on small concerns about character (e.g. if someone is willing to do a small sketchy thing, they’re probably going to be more sketchy when the stakes are higher). Be especially concerned about clear signs of significant dishonesty /​ integrity breaches /​ norm-breaking in someone’s past – probably just don’t work with someone if you find any. Also look out for recklessness, overconfidence, self-importance, and self-delusion as warning signs. If someone sounds reckless in how they talk, it might not just be bluster.

  • Be more willing to share small concerns about character with others, even though this increases negative gossip, or could reflect badly on the community.

  • Try to avoid plans and structures that rely on people being unusually strong in these character virtues, especially if they involve high stakes.

  • Do more to support and encourage people to develop important character virtues, like honesty and humility; to shift the culture in that direction; to put off people without the character virtues we value; and to uphold those virtues myself (even in the face of short-term costs). E.g. it seems plausible that some groups (e.g. 80k) should talk more about having an ethical life in general, rather than focusing mainly on consequences.

  • I should be more concerned about the character of people I affiliate with.

10. It’s even more important to emphasise not doing things that seem clearly wrong from a common-sense perspective even if you think they could have a big positive impact. One difficulty of focusing on consequences is that it removes the bright lines around norms – any norm can be questioned, and a small violation doesn’t seem so bad because it’s small. Unfortunately, in the face of self-delusion and huge stakes, humans probably need relatively simple norms to prevent bad behaviour. Framing these norms seems hard, since they need to be both simple enough to provide a bright line and sophisticated enough to apply to high-stakes, ethically complex & unintuitive situations, so I’d like to see more work to develop them. One that makes sense to me is: “don’t do something that seems clearly wrong from a common-sense perspective even if you think it could have a big positive impact”. I think we could also aim to have brighter lines around honesty/integrity, though I’m not sure how to precisely define them (though I find this and this helpful).

11. EA & longtermism are going to have controversial brands and be met with skepticism by default by many people in the media, the Twitterati, public intellectuals etc. for some time. (This update isn’t only due to SBF.) This suggests that media-based outreach is going to be less effective going forward. Note that so far it’s unclear whether EA’s perception among the general public has changed (most people have still never heard of EA) – but the views of people in the media shape perception over the longer term. I think there’s a good chance perceptions continue to get worse as e.g. all the FTX TV shows come out, and future media coverage has a negative slant.

I’ve also updated back in favour of EA not being a great public-facing brand, since it seems to have held up poorly in the face of its first major scandal.

This suggests that groups who want a ‘sensible’ or broadly appealing brand should disassociate more from EA & longtermism, while we accept they’re going to be weirder and niche for now. (In addition, EA may want to dissociate from some of its more controversial elements, either as a complementary or alternative strategy.)

12. The value of additional money to EA-supported object-level causes has gone up by about 2x since the summer, and the funding bar has also gone up 2x. This means projects that previously seemed marginal shouldn’t get funded. Open Philanthropy estimates ~45% of recent longtermist grants wouldn’t clear their new bar. But it also means that marginal donations are higher-impact. (Why 2x rather than 3x? The amount of capital is down ~3x, but we also need to consider future donations.)
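To make the “why 2x?” step concrete, here’s a minimal sketch of one way the arithmetic could work, using purely illustrative numbers of my own (none of these figures come from the post): if the marginal value of a donation scales roughly inversely with the total resources a cause expects to receive, and expected resources are current capital plus future donations, then a ~3x fall in current capital raises marginal value by less than 3x, because expected future donations didn’t fall.

```python
# Illustrative only: hypothetical numbers, and a simple model in which the
# marginal value of a dollar scales inversely with total expected resources.

current_capital_before = 30.0   # hypothetical units (e.g. $bn), pre-collapse
current_capital_after = current_capital_before / 3   # capital down ~3x
expected_future_donations = 10.0  # hypothetical, assumed unchanged

def marginal_value(current_capital, future_donations):
    """Marginal value of an extra dollar, modelled as 1 / (total expected resources)."""
    return 1.0 / (current_capital + future_donations)

before = marginal_value(current_capital_before, expected_future_donations)
after = marginal_value(current_capital_after, expected_future_donations)

print(f"Marginal value rose by ~{after / before:.1f}x")  # ~2.0x with these numbers
```

With these made-up inputs the answer happens to come out at ~2x; the actual estimate presumably rests on different figures and a more careful model, but the qualitative point is that future donations dampen the effect of the capital drop.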

13. Forward-looking long-term cost-effectiveness of EA community building seems lower to me. (Note that I especially mean people-based community building efforts, not independent efforts to spread certain underlying ideas, related academic field building etc.) This is because:

  • My estimate of the fraction of dangerous actors attracted has gone up, and my confidence in our ability to spot them has gone down, making the community less valuable.

  • Similarly, if the EA community can’t be high-trust by default, and is less generally competent, its potential for impact is lower than I thought.

  • Tractability seems lower due to the worse brand, worse appeal and lower morale, and potential for negative feedback loops.

  • Past cost-effectiveness seems much lower.

  • More speculatively, as of January, I feel more pessimistic about the community’s ability to not tear itself apart in the face of scandal and setbacks.

  • I’m also more concerned that the community is simply unpleasant for many people to be part of.

This means I think previously borderline community-growth efforts should be cut. And a higher funding bar would increase the extent of those cuts – it seems plausible 50%+ of efforts that were funded in 2022 should not be funded going forward. (Though certain efforts to improve culture could be more effective.) I don’t currently go as far as to think that building a community around the idea of doing good more effectively won’t work.

14. The value of actively affiliating with the current EA community seems lower. This is mainly due to the above: it seems less valuable & effective to grow the community; the brand costs of affiliation are higher; and I think it’s going to be less motivating to be part of. However, I’ve also updated towards the costs of sharing a brand being bigger than I thought. Recent events have created a ‘pile-on’ in which not only have lots of people and organisations been tarred by FTX, but many more issues have been dug up, and critics signal-boosted. I didn’t anticipate how strong this dynamic would be.

Personally, this makes me more inclined to write about specific causes like AI alignment rather than EA itself, and/or to write about broader ideas (e.g. having a satisfying career in general, rather than EA careers advice). It seems more plausible to me e.g. that 80k should work harder to foster its own identity & community, and focus more on sending people to cause-specific communities (though continue to introduce people to EA), and it seems more attractive to do things like have a separate ‘effective charity’ movement.

On the other hand, the next year could be a moment of malleability where it’s possible to greatly improve EA. So, while I expect it makes sense for some groups to step back more, I hope others stay dedicated to making EA the best version of itself it can be.

15. I’m less into the vision of EA as an identity /​ subculture /​ community, especially one that occupies all parts of your life, and more into seeing it as a set of ideas & values you engage with as part of an otherwise normal life. Having EA as part of your identity and main social scene makes it harder to criticise people within it, and makes it more likely that you end up trusting or deferring to people just because they’re EAs. It leads to a weird situation where people feel like all actions taken by people in the community speak for them, when there is no decision-making power at the level of ‘the community’ (example). It complicates governance by mixing professional and social relationships. If the community is less valuable to build and affiliate with, the gains of strongly identifying with it are lower. Making EA into a subculture associates the ideas with a particular lifestyle /​ culture that is unappealing if not actively bad for many people; and at the very least, it associates the drama that’s inevitable within any social scene with the ideas. Making EA your whole life is an example of non-moderate action. It also makes it harder to moderate your views, making it more likely you end up naively optimizing. Overall, I don’t feel confident I should disavow EA as an identity (given that it already is one, it might be better to try to make it work better); but I’ve always been queasy about it, and recent events make me a lot more tempted.

In short, this would mean trying to avoid thinking of yourself as “an EA”. More specifically, I see all of the following as (even) worse ideas than before, especially doing more than one at the same time: (i) having a professional and social network that’s mainly people very into EA; (ii) taking on lots of other countercultural positions at the same time; (iii) moving to EA hubs, especially the Bay Area, and especially if you don’t have other connections in those places; (iv) living in a group house; (v) financially depending on EA, e.g. not having skills that can be used elsewhere. It makes me more keen on efforts to have good discourse around the ideas, like The Precipice, and less into “community building”.

16. There’s a huge difference between which ideas are true and which ideas are good for the world to promote, in part because your ideas can be used and interpreted in very different ways from what you intend.

17. I feel unsure whether EA should become more or less centralised, and suspect some of both might make sense. For instance, having point people for various kinds of coordination & leadership seems more valuable than before (diffusion of responsibility has seemed like a problem, as has information flow); but, as covered, sharing the same brand and a uniting identity seems worse than before, so it seems more attractive to decentralise into a looser-knit collection of brands, cause-specific scenes and organisations. The current situation, where people feel like EA is a single body that speaks for & represents them, but where there’s no community-wide membership or control, seems pretty bad.

18. I should be less willing to affiliate with people in controversial industries, especially those with little governance or regulation, or that are in a big bull market.

19. I feel unsure how to update about promotion of earning to give. I’m inclined to think events don’t imply much about ‘moderate’ earning to give (e.g. being a software engineer and donating), and the relative value of donations has gone up. I’m more skeptical of promoting ‘ambitious’ earning to give (e.g. aiming to become a billionaire), because it’s a more contrarian position, is more likely to attract dangerous maximisers, relies on a single person having a lot of influence, and now has a bad track record – even more so if it involves working in controversial industries.

20. I’m more skeptical of strategies that involve people in influential positions making hard calls at a crucial moment (e.g. in party politics, AI labs), because this relies on those people having good character, though I wouldn’t go as far as avoiding them altogether.

21. I’d updated a bit towards the move-fast-and-break-things / super-high-ambition / chaotic philosophy of org management due to FTX (vs. prioritise carefully / be careful about overly rapid growth / follow best practice etc.), but I now undo that update. (Making this update on one data point was probably also a mistake in the first place.)

22. When promoting EA, we should talk more about concrete impactful projects and useful ideas, and less about specific people (especially if they’re billionaires, in controversial industries or might be aggressive maximisers). I’d mostly made this update in the summer for other reasons, but it seems more vindicated now.

Updates I’m basically not making

A lot of the proposals I’ve read post-FTX seem like they wouldn’t have made much difference to whether something like FTX happened, or at least don’t get to the heart of it.

I also think many critiques focus far too much on the ideas as opposed to personality and normal human flaws. This is because the ideas are more novel & interesting, but I think they played second fiddle.

Here I’ve listed some things I mostly haven’t changed my mind about:

  • The core ideas of effective altruism still make sense to me. By ‘core ideas’ I mean both the underlying values (e.g. we should strive to help others, to prioritise more, be more impartial, think harder about what’s true) and core positions, like doing more about existential risk, helping the developing world and donating 10%.

    First (assuming the allegations are true), the FTX founders were violating the values of the community: they breached its collaborative spirit, and made decisions that were likely to do great damage rather than help people.

    Second, I think the crucial question is what led them to allegedly make such bad decisions in the first place, rather than how they (incorrectly) rationalised these decisions. To me, that seems more about personality (the ‘aggressive optimizing’ cluster), lack of governance/​regulation, and ordinary human weaknesses.


    Third, the actions of individuals don’t tell us much about whether a philosophical idea like treating others more equally makes sense. They also don’t tell us much about whether e.g. GiveWell’s research is correct.

The main update I make is above: that some of the ideas of effective altruism, especially extreme versions of them, attract some dangerous people, and the current community isn’t able to spot and contain them. This makes the community as it exists now seem less valuable to me.

I’m more concerned that, while the ideas might be correct and important, they could be bad to promote in practice, since they could help to rationalise bad behaviour. But overall I feel unsure how worried to be about this. Many ideas can be used to rationalise bad behaviour, so never spreading any ideas that could be misused in this way seems untenable. I also intend to keep practising the ideas in my own life.

  • I’d make similar comments about longtermism, though it seems even less important to what happened, because FTX seems like it would have happened even without longtermism: SBF would have supported animal welfare instead. Likewise, there’s a stronger case for risk neutrality with respect to raising funding for GiveWell charities than longtermist ones, because the returns diminish much less sharply.

  • I’d also make similar comments about utilitarianism. Though, I think recent events provide stronger reasons to be concerned about building a community around utilitarianism than effective altruism, because SBF was a utilitarian before being an EA, and it’s a more radical idea.

  • EA should still use thought experiments & quantification. I agree with the critique that we should be more careful with taking the results literally and then confidently applying them. But I don’t agree we should think less about thought experiments or give up trying to quantify things – that’s where a lot of the value has been. I think the ‘aggressive optimizing’ personality gets closer to the heart of the problem.

  • I don’t think the problem was with common views on risk. This seems misplaced in a few ways. First, SBF’s stated views on risk were extreme within the community (I don’t know anyone else who would happily gamble on St Petersburg), so he wasn’t applying the community’s positions. Second, (assuming the allegations are true) since fraud was likely to be caught and have huge negative effects, it seems likely the FTX founders made a decision with negative expected value even by their own lights, rather than a long-shot but positive-EV bet. So, the key issue is why they were so wrong, not centrally their attitudes to risk. And third, there’s the alleged fraud, which was a major breach of integrity and honesty no matter the direct consequences. Though, I agree the (overly extreme) views on risk may have helped the FTX founders to rationalise a bad decision.

    I think the basic position that I’ve tried to advance in my writing on risk is still correct: if you’re a small actor relative to the causes you support, and not doing something that could set back the whole area or do a lot of harm, then you can be closer to risk-neutral than a selfish actor. Likewise, I still think it makes sense for young people who want to do good and have options to “aim high” i.e. try out paths that have a high chance of not going anywhere, but would be really good if they succeed.

  • The fact that EAs didn’t spot this risk to themselves doesn’t mean we should ignore their worries about existential risk. I’ve heard this take a lot, but it seems like a weird leap. Recent events show that EAs are not better than e.g. the financial press at judging character and financial competence. This doesn’t tell you much about whether the arguments they make about existential risk are correct or not. (Like my earlier example of ignoring a pandemic expert’s warnings because they didn’t realise their partner was cheating on them.) These are totally different categories, and only one is a claimed area of expertise. If anything, if a group who are super concerned about risks didn’t spot a risk, it should make us more concerned that there are as-yet-unknown existential risks out there.

    I think the steelman of this critique involves arguing that bad judgement about e.g. finance is evidence for bad judgement in general, and so people should defer less to the more contrarian EA positions. I don’t personally put a lot of weight on this argument, but I feel unsure and discuss it in the first section. Either way, it’s better to engage with the arguments about existential risk on their merits.

    There’s another version of this critique that says that these events show that EAs are not capable of “managing” existential risks on behalf of humanity. I agree – it would be a terrible failure if EAs end up the only people working to reduce existential risk – we need orders of magnitude more people and major world governments working on it. This is why the main thrust of EA effort has, in my eyes, been to raise concern for these risks (e.g. The Precipice, WWOTF), do field building (e.g. in AI), or fund groups outside the community (e.g. in biosecurity).

  • I don’t see this as much additional evidence that EA should rely less on billionaires. It would clearly be better for there to be a more diversified group of funders supporting EA (I believed this before). The issue is what to do about it going forward. 1 billionaire ~= 5,000 Google SWEs earning to give 30% of their income, and that’s more people than the entire community (see the rough illustrative calculation after this list). So while we can and should take steps in this direction, it’s going to be hard to avoid a situation where most donations made according to EA principles are made by the wealthiest people interested in the ideas.

  • Similarly, I agree it would be better if we had more public faces of EA, and that we should be doing more to achieve this (I also thought this before). That said, I don’t think it’s that easy to make progress on. I’m aware of several attempts to get more people to become faces of EA, but the people involved have ended up not wanting to do it (which I sympathise with, having witnessed recent events), and even if they wanted to do it, it’s unclear they could be successful enough to move the needle on perceptions of EA.

  • It doesn’t seem like an update on the idea that billionaires have too much influence on cause prioritisation in effective altruism. I don’t think SBF had much influence on cause prioritisation, and the Future Fund mainly supported causes that were already seen as important. I agree SBF was having some influence on the culture of the community (e.g. towards more risk-taking), which I attribute to the halo effect around his apparent material success. Billionaires can also of course have disproportionate influence on what it’s possible to get paid to work on, which sucks, but I don’t see a particularly promising route to avoiding that.

  • I don’t see events as clear evidence that funding decisions should be more democratised. This seems like mainly a separate issue and if funding decisions had been more democratised, I don’t think it would have made much difference in preventing what happened. Indeed, the Future Fund was the strongest promoter of more decentralised funding. This said, I’d be happy (for other reasons) to see more experiments with decentralised philanthropy, alternative decision-making and information aggregation mechanisms within EA and in general.

  • I don’t see this as evidence that moral corruption by unethical industries is a bigger problem than we thought. A narrative in which SBF was ‘corrupted’ by the crypto industry doesn’t seem like a big driver to me – being corrupted by money/power seems closer to the mark, and the lack of regulation in crypto was a problem.

  • I don’t take this as an update in favour of the rationality community over the EA community. I make mostly similar updates about that community, though with some differences.

  • I’m unconvinced that there should have been much more scenario/risk planning. I think it was already obvious that FTX might fall 90% in a crypto bear market (e.g. here) – and if that was all that had happened, things would probably have been OK. What surprised people was the alleged fraud, and that everything was so entangled it would all go to zero at once, and I’m skeptical additional risk-surveying exercises would have ended up with a significant credence on these (unless a bunch of other things were different). There were already some risk-surveying attempts and they didn’t get there (e.g. in early 2022, Metaculus gave a 1% chance of FTX making any default on customer funds over the year, with ~40 forecasters). I also think that even if someone had concluded there was e.g. a 10% chance of this, it would have been hard to do something about it ahead of time that would have made a big difference. This post was impressive for making the connection between a crypto crash and a crash in SBF’s reputation.
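Here’s the rough illustrative calculation behind the “1 billionaire ~= 5,000 Google SWEs” comparison mentioned above. The salary, career length and resulting billionaire-scale total below are my own assumed placeholders, not figures from the post; they’re only meant to show the order of magnitude.

```python
# Purely illustrative: all inputs are assumed placeholder values, not figures
# from the post. The point is only the rough order of magnitude.

swe_total_comp = 300_000      # assumed annual total compensation for a senior SWE ($)
donation_fraction = 0.30      # earning to give 30% of income (from the post)
career_years = 35             # assumed span of donating

lifetime_donations_per_swe = swe_total_comp * donation_fraction * career_years
# ~= $3.15m per SWE over a career

num_swes = 5_000
total_from_swes = num_swes * lifetime_donations_per_swe
# ~= $15.75bn in total

print(f"Lifetime giving per SWE: ~${lifetime_donations_per_swe / 1e6:.1f}m")
print(f"5,000 SWEs together:     ~${total_from_swes / 1e9:.1f}bn")
# With these assumed inputs, 5,000 such donors roughly match a single donor
# giving away a fortune in the low tens of billions - and 5,000 committed
# earning-to-give donors is more people than the whole community.
```

Different assumed salaries or career lengths shift the multiplier, but the qualitative point stands: one very large donor can outweigh more individual earning-to-give donors than the community currently contains.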

This has been the worst setback the community has ever faced. And it would make sense to me if many want to take some kind of step back, especially from the community as it exists today.

But the EA community is still tiny. Looking ahead to effective altruism’s second decade, there’s time to address its problems and build something much better. Indeed, now is probably one of the best opportunities we’ll ever have to do that.

I hope that even if some people step back, others continue to try to make effective altruism the best version of itself it can be – perhaps a version that can entertain radical ideas, yet is more humble, moderate and virtuous in action; that’s more professionalised; and that’s more focused on competent execution of projects that concretely help with pressing problems.

I also continue to believe the core values and ideas make sense, are important and are underappreciated by the world at large. I hope people continue to stand up for the ideas that make sense to them, and that these ideas can find more avenues for expression – and help people to do more good.