Milan Griffes on EA blindspots

Milan Griffes (maɪ-lɪn) is a community member who comes and goes, and so is a reliable source of very new ideas. He used to work at GiveWell, but left to explore ultra-neglected causes (psychedelics for mental health and speculative moral enhancement) and, afaict, also because he takes cluelessness unusually seriously, which makes it hard to be a simple analyst.

He’s closer to nearby thinkers like David Pearce, Ben Hoffman, Andres Gomez Emilsson, and Tyler Alterman, who don’t mesh with EA for a bunch of reasons, chiefly weirdness or principles or both.

Unlike most critics, he has detailed first-hand experience of the EA heartlands. For years he has tried to explain his disagreements, but they haven’t landed, mostly (I conjecture) because of his style, though plausibly also because of an inferential distance it’s important for us to bridge.

He just put up a very clear list of possible blindspots on Twitter:

I think EA takes some flavors of important feedback very well but it basically can’t hear other flavors of important feedback [such as:]

  1. basically all of @algekalipso’s stuff [Gavin: the ahem direct-action approach to consciousness studies]

  2. mental health gains far above baseline as an important x-risk reduction factor via improved decision-making

  3. understanding psychological valence as an input toward aligning AI

  4. @ben_r_hoffman’s point about seeking more responsibility implying seeking greater control of others / harming ability to genuinely cooperate

  5. relatedly how paths towards realizing the Long Reflection are most likely totalitarian

  6. embodied virtue ethics and neo-Taoism as credible alternatives to consequentialism that deserve seats in the moral congress

  7. metaphysical implications of the psychedelic experience, esp N, N-DMT and 5-MeO-DMT

  8. general importance of making progress on our understanding of reality, a la Dave. (Though EA is probably reasonably sympathetic to a lot of this tbh)

  9. consequentialist cluelessness being a severe challenge to longtermism

  10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang

  11. EA correctly identifies improving institutional decision-making as important but hasn’t yet grappled with the radical political implications of doing that

  12. generally too sympathetic to whatever it is we’re pointing to when we talk about “neoliberalism”

  13. burnout & lack of robust community institutions actually being severe problems with big knock-on effects, @ben_r_hoffman has written some on this

  14. declining fertility rate being concerning globally and also a concern within EA (its implications about longrun movement health)

  15. @MacaesBruno’s virtualism stuff about LARPing being what America is doing now and the implications of that for effective (political) action

  16. taking dharma seriously a la @RomeoStevens76’s current research direction

  17. on the burnout & institution stuff, way more investment in the direction @utotranslucence’s psych crisis stuff and also investment in institutions further up the stack

  18. bifurcation of the experience of elite EAs housed in well-funded orgs and plebeian EAs on the outside being real and concerning

  19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning

  20. psychological homogeneity of folks attracted to EA (on the Big 5, Myers-Briggs, etc) being curious and perhaps concerning re: EA’s generalizability

  21. relatedly, the “walking wounded” phenomenon of those attracted to rationality being a severe adverse selection problem

  22. tendency towards image management (e.g. by @80000Hours, @open_phil) cutting against robust internal criticism of the movement; generally low support of internal critics (Future Fund grant-making could help with this but I’m skeptical)

[Gavin editorial: I disagree that most of these are not improving at all / are being wrongly ignored. But most should be thought about more, on the margin.

I think #5, #10, #13, #20 are important and neglected. I’m curious about #2, #14, #18. I think #6, #7, #12, #15 are wrong / correctly ignored. So a great hits-based list.]