A common criticism of EA/rationalist discussion is that we reinvent the wheel—specifically, that concepts which become part of the community have close analogues that have been better studied in the academic literature. Or, in some cases, that we fixate on some particular academically sourced notion to the exclusion of many similar or competing theories.
I think we can simultaneously test and address this purported problem by crowdsourcing an open database mapping EA concepts to related academic concepts, and in particular citing papers that investigate the latter. In this thread I propose the following format:
‘Answers’ name an EA or rat concept that you suspect might have, or know to have, mappings to a broader set of academic literature.
Replies to answers cite at least one academic work (or a good Wikipedia article) describing a related phenomenon or concept. In some cases, an EA/rat concept might be an amalgam of multiple other concepts, so please give as many replies to answers as seem appropriate.
Feel free but not obliged to add context to replies (as long as they link a good source)
Feel free to reply to your own answer
I’ll add any responses this thread gets to a commentable Google sheet (which I can keep updating), and share that sheet afterwards. Hopefully this will be a valuable resource both for fans of effective altruism to learn more about their areas of interest, and for critics asserting the reinvented-wheel nature of EA/rat thought, either to prove instances of their case (where an answer gets convincing replies) or to refute them (where an answer gets no replies, or only loosely related ones).
I’ll seed the discussion with a handful of answers of my own, for most of which I have at best tentative mappings.
Development Economics
One of the forum’s highest rated posts is about how we should simply improve economic growth in poor countries
I don’t see how this is reinventing the wheel? The post makes many references to development economics (11 mentions to be precise). It was not an instance of independently developing something that ended up being close to development economics.
I saw a lot of criticism of the EA approach to x-risks on the grounds that we’re just reinventing the wheel, and that this work already exists in government disaster preparedness and the insurance industry. I looked into the fields we’re supposedly reinventing, and they weren’t the same at all: the scale of catastrophes previously investigated was far smaller, reaching only to regional events like natural disasters. No one in any position of authority had prepared a serious plan for any situation where human extinction was a possibility, even the scenarios the general public has heard of (nuclear winter, asteroids, climate change).
Tabooing your words
It would be helpful if you mentioned who the original inventor was.
Global catastrophic risk studies
I had the impression there was a field of ‘(global) resilience studies’ I’d seen before, but on a first look I can’t find anything convincingly on point at the moment.
Pretty sure EA basically invented that (yes, people were working on related things before it and outside of it, but that still seems different from ‘reinventing the wheel’).
Longtermism
Progress studies
No? cf. this dialogue between Jason Crawford and Clara Collier, Max Daniel’s post (and this thread with Jason), Jason’s attempt to find the crux between PS and x-risk communities, etc
Scout mindset
Moloch
Tragedy of the Commons
Not really, “coordination failure due to positional arms race” is better.
I’m not sure I take a throwaway comment by someone closely socially tied to the author of the comment as evidence that it isn’t equivalent.
Also it doesn’t need to be literally equivalent to them. The criticism, if there is one, would be that Scott’s concept doesn’t add anything to the work done by academics—although that criticism would be false if it unified hitherto un-unified fields in a useful way.
That’s fair, no need to take it.
Stuart Armstrong (author of the OP in the link above) seems to think it was academically inspiring, cf. the passage starting with
Not sure if that counts for you.
(I’m not socially tied to Luke in any way. I had the same misconception as you a long time ago, remember reading that comment as clarifying, and thought you would appreciate the share.)
Moloch is just a fanciful term for coordination traps, right?
Do you have a citation for coordination traps specifically? Coordination games seem pretty closely related, but Googling for the former I find only casual/informal references to it being a game (possibly a coordination game specifically) with multiple equilibria, some worse than others, such that players might get trapped in a suboptimal equilibrium.
Not really; rationalist jargon is often more memetically fit than academic jargon so it’s often hard for me to remember the original language even when I first learned something from non-rationalist sources. But there’s a sense in which the core idea (Nash equilibria may not be Pareto efficient) is ~trivial, even if meditating on it gets you something deep/surprising eventually.
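The core idea mentioned above (that Nash equilibria need not be Pareto efficient) can be made concrete with a toy coordination game. The payoff numbers below are made up purely for illustration:

```python
# A 2x2 coordination game with hypothetical payoffs: both (A, A) and
# (B, B) are pure-strategy Nash equilibria, but (B, B) is Pareto-dominated
# by (A, A) -- so players can get "trapped" in the worse equilibrium.

# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def is_nash(profile):
    """True if neither player can gain by unilaterally switching actions."""
    r, c = profile
    row_pay, col_pay = payoffs[(r, c)]
    row_ok = all(payoffs[(alt, c)][0] <= row_pay for alt in actions)
    col_ok = all(payoffs[(r, alt)][1] <= col_pay for alt in actions)
    return row_ok and col_ok

def pareto_dominates(p, q):
    """True if profile p makes everyone at least as well off as q,
    and someone strictly better off."""
    pp, qq = payoffs[p], payoffs[q]
    return all(a >= b for a, b in zip(pp, qq)) and pp != qq

equilibria = [(r, c) for r in actions for c in actions if is_nash((r, c))]
print(equilibria)                                  # both coordination outcomes are Nash
print(pareto_dominates(("A", "A"), ("B", "B")))    # yet one strictly dominates the other
```

Both coordinated outcomes are stable (no one gains by deviating alone), which is exactly why the inferior one can persist: escaping it requires the joint move that no individual player can make unilaterally.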
I don’t really think of presenting this as Moloch as “reinventing the wheel,” more like seeing the same problem from a different angle, and hopefully a pedagogically better one.
ITN framework
Clusters in thingspace
The Sorites Paradox
This is very different. I’d reference Wittgenstein’s Family Resemblances instead.
The Telephone Theorem
Noisy channel coding theorem