Milan Griffes on EA blindspots

Milan Griffes (maɪ-lɪn) is a community member who comes and goes, which in turn makes him a reliable source of very new ideas. He used to work at GiveWell, but left to explore ultra-neglected causes (psychedelics for mental health and speculative moral enhancement), and, as far as I can tell, also because he takes cluelessness unusually seriously, which makes it hard to be a simple analyst.

He's closer to nearby thinkers like David Pearce, Ben Hoffman, Andres Gomez Emilsson, and Tyler Alterman, who don't glom with EA for a bunch of reasons, chiefly weirdness or principles or both.

Unlike most critics, he has detailed first-hand experience of the EA heartlands. For years he has tried to explain his disagreements, but they didn't land, mostly (I conjecture) because of his style, but plausibly also because of an inferential distance it's important for us to bridge.

He just put up a very clear list of possible blindspots on Twitter:
nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
EA correctly identifies improving institutional decision-making as important but hasn’t yet grappled with the radical political implications of doing that
generally too sympathetic to whatever it is we’re pointing to when we talk about “neoliberalism”
burnout & lack of robust community institutions actually being severe problems with big knock-on effects; @ben_r_hoffman has written some on this
declining fertility rate being concerning globally and also a concern within EA (given its implications for long-run movement health)
@MacaesBruno’s virtualism stuff about LARPing being what America is doing now and the implications of that for effective (political) action
taking dharma seriously a la @RomeoStevens76's current research direction
on the burnout & institution stuff, way more investment in the direction of @utotranslucence's psych crisis work, and also investment in institutions further up the stack
bifurcation of the experience of elite EAs housed in well-funded orgs and plebeian EAs on the outside being real and concerning
worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning
psychological homogeneity of folks attracted to EA (on the Big 5, Myers-Briggs, etc) being curious and perhaps concerning re: EA’s generalizability
relatedly, the “walking wounded” phenomenon of those attracted to rationality being a severe adverse selection problem
tendency towards image management (e.g. by @80000Hours, @open_phil) cutting against robust internal criticism of the movement; generally low support of internal critics (Future Fund grant-making could help with this but I’m skeptical)
[Gavin editorial: I disagree that most of these are not improving at all, or are being wrongly ignored; but most should be thought about more, on the margin.
I think #5, #10, #13, and #20 are important and neglected. I'm curious about #2, #14, and #18. I think #6, #7, #12, and #15 are wrong, or correctly ignored. So: a great hits-based list.]