This is great!
Do you have any “impact stories” to share about the group? That is, people who signed the Pledge but would not have if not for EA London, or donations given that otherwise would not have been, etc.? Getting 250 people into a Facebook group is definitely a good thing, and worth replicating, but any tricks for turning semi-passive followers into an active/impactful community would also be welcome.
The cost, or our understanding of the cost? I don’t think diminishing marginal utility has been achieved in such a drastic way; I think that the old Peter Singer quote from which that number originated has been taken out of context for decades. I could be wrong, though!
Good list! Makes me wonder whether there’s some way to model the expected level of adequacy in a field. Factors we’d have to consider:
How much money is available within the field?
How much prestige is available within the field?
How many people are there in the field?
How much do participants care about the field for non-monetary, non-prestige reasons? That is, how inherently fun is it to work within this field?
How hard is it to work within the field? That is, to what extent do skills other than “having good ideas” matter? (Some scientific fields invest hundreds of hours of grunt work in every paper, while a philosophy paper requires little effort outside of writing.)
How fast does new information enter the field, compared to the amount of information that exists within the field already?
How many competing ideas/groups exist within the field, and how easy is it for a new idea/group to get started?
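The factors above could be combined into a very rough scoring model. Here's a minimal sketch of what that might look like; every factor name, weight, and example number below is a hypothetical illustration I made up for this comment, not measured data:

```python
from dataclasses import dataclass
from math import log10

@dataclass
class Field:
    money: float          # annual rewards available (normalized 0-1)
    prestige: float       # prestige available (normalized 0-1)
    participants: int     # number of serious participants
    fun: float            # intrinsic enjoyment of the work (0-1)
    grunt_work: float     # effort needed beyond "having good ideas" (0-1)
    info_turnover: float  # new information relative to existing stock (0-1)
    entry_ease: float     # how easily new ideas/groups get started (0-1)

def adequacy_score(f: Field) -> float:
    """Higher = more adequate: strong incentives draw in many motivated
    participants, new information is absorbed quickly, and friction is low."""
    incentive = (f.money + f.prestige) / 2 + f.fun
    friction = f.grunt_work * (1 - f.entry_ease)
    # More participants chasing the same rewards means ideas get absorbed
    # faster; damped logarithmically so the millionth entrant matters far
    # less than the tenth.
    competition = log10(max(f.participants, 1))
    return incentive + f.info_turnover + competition - friction

# Hypothetical inputs loosely inspired by competitive Magic: The Gathering.
mtg = Field(money=0.2, prestige=0.3, participants=10_000, fun=0.9,
            grunt_work=0.4, info_turnover=0.6, entry_ease=0.7)
print(f"adequacy score: {adequacy_score(mtg):.2f}")
```

The point isn't the specific functional form (additive vs. multiplicative, the log damping, etc., are all guesses); it's that once the factors are written down explicitly, you can argue about weights and compare fields on a common scale.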
As a toy example, you could look at the metagame for the Modern format of Magic: The Gathering, which consists of roughly one hundred thousand players (maybe ten thousand of whom are really serious about winning). Rewards for being in the top 1% of the serious group equate to a few thousand dollars in profit per year and a few dozen fans; for the top 0.1%, a few tens of thousands of dollars in profit per year and a few thousand fans, plus a solid chance at a steady job if you want it (producing Magic-related media, working as a designer, etc.).
It’s possible to generate a good burst of fame and profit by creating a new deck that matches up well against current popular decks (the “metagame”).
Information enters the field rapidly, at a pace of a few hundred new cards (into a pool of 11,000) every three months. Building a new deck and thoroughly testing it against the metagame might take a few hundred dollars and a few dozen hours, but the cost is balanced by the fact that playing Magic: The Gathering is a lot of fun. The competitors you need to worry about are people who spend about 50 hours/week playing, and who are as skilled or a little more skilled than you are at the “basics” of the game. Maybe 5% of new decks that are tested this thoroughly turn out to generate any kind of positive return, relative to playing a deck someone else designed already. And so on.
...at the end, we can observe that a new, good Modern deck (one capable of winning a 500-player, $5000 tournament) comes out once every couple of months, and that cards from sets that are more than one year old are almost never at the center of “new, good decks”, relative to newer sets. Magic: The Gathering seems to be an adequate market; any new ideas are absorbed quickly, and few people really get a chance to profit off of them. The evaluation system for cards and decks optimizes more slowly than the evaluation system for stocks, but more quickly than the evaluation system for Major League Baseball players circa 1990.
I’d be interested in viewing examples for other fields, with better data collection, perhaps gathered into some kind of “adequacy” database.
People with the requisite talent to be a “top” or “senior-level” hire seem to be rare in general: there’s a huge market for recruiting organizations whose only job is to refer promising senior-level candidates to companies that will pay a five- or six-figure bounty for a successful placement at that level.
How many people connected to effective altruism are at this level, AND are not involved with some other key EA project, AND do not already have a job that generates enough money that they’d be very unlikely to take a low-salary job at a small EA organization? (Even if you care a lot about impact, it’s probably tempting to make $150,000 and donate $50,000 for “someone else” to make that impact, rather than to take the $50,000 job yourself.)
It seems like we’re talking around one aspect of the problem: What, exactly, defines a “top hire”? What are the differences between that person and the average enthusiastic recent college graduate? How many of those differences can be remedied with an internship and some skills training, and how many are inherent features of the way someone “turned out” after their first twentysomething years of being alive? What fraction of the EA population—among people who are willing to go in for unpaid training and don’t already have great jobs/positions—might actually be able to become “top hires” with a reasonable amount of training?
I’d be interested to hear your thoughts on that, Joey. Having run a few small organizations myself, I’ve worked with people who were reliable vs. unreliable, or who had good vs. bad instincts, and I know what my own criteria look like, but I don’t have a good sense for how many people actually fit those criteria, since I’ve done very little direct “hiring” (these were student orgs, so anyone who wanted to join was welcome).
One Molochian factor that was briefly mentioned in the dead-baby example: At least until good-aligned organizations get good at training people to generate outrage, the people most skilled at generating it will typically direct it at more-or-less random topics that happen to affect them personally.
See, for example, the one-man campaign by a heart surgeon, whose wife died of very rare complications, to reduce the odds of those complications ever happening again—a campaign that drew unusually rapid support from the FDA, because he made a Change.org petition and writes in a style that is accessible yet sufficiently medical-sounding to draw attention from many different groups.
(I’m no medical expert, but the surgeon’s suggestions are controversial, and many doctors seem to think they’ll cause more harm than good by squeezing out the good uses of the procedure which caused the complications.)
If this person had been the father of a child who died of parenteral nutrition-associated liver disease, the FDA might well have acted on that issue instead. But it’s hard to point people like this in the “right direction”.
I did like Holden’s post of that name, though it would be easy to mangle the concept in translation.
One better way to phrase it might be in historical perspective: “If someone wanted to help people as much as they could a hundred years ago, they might be able to volunteer in their town, and—unless they were Cornelius Vanderbilt—that would be about as good as it got. Now, we know a lot more about the world, which means we can help a lot more people, and make better use of whatever time and energy we’d like to give.”
These are all composites—I’m not giving these exact speeches, but I might borrow different examples at different times to use in conversations.
When used in context, with the specific people I think are likely to respond best, bits and pieces of these frames have been fairly effective; something like 20 people I’ve introduced to this have gone on to donate some amount of money to an EA-approved charity.
The idea of using “different moral perspectives” is specifically to convince as wide a range of people as possible. Too many common EA arguments assume that everyone is consequentialist, deep down. But you do have to match the perspective to the person—otherwise, the conversation can certainly backfire!
I strongly second this. This doesn’t even have to mean direct EA work—I think you learn a lot even by volunteering for non-EA causes (a few hours knocking on doors for a political candidate, an evening at a soup kitchen, etc.). It’s good to see how nonprofits of all stripes organize their events and volunteers, and also good to be able to discuss the different nonprofit experiences you’ve had. (It’s easy to come across as “do-nothing philosopher idly speculating” when you talk about EA with someone who spends every weekend volunteering, and that’s not a good look.)
To provide some context for this discussion, here’s a 2017 overview of the cause prioritization landscape (not an intellectual summary—more about the way resources are distributed, and what happens to the output).
That summary notes that existing cause-prioritization research is rarely used by non-EAs, but has influenced some government funding when it was spread by other parties (e.g. the Copenhagen Consensus Center talking to the British government). If a journal did come to exist for cause prioritization, much of its impact might come from how the results are shared, rather than the existence of the results in a journal format. And the EA community already has routes to sharing our results—so to me, the main question at hand is: “How do we get better results?” Or, as the OP put it, how do we make intellectual progress?
If we want to focus on accelerating progress and helping discussions not become “lost”, a journal doesn’t seem like the optimal format. Something like the Cause Prioritization Wiki, which allows for rapid updating and the aggregation of content in a single place (rather than scattered through many articles) seems better for those goals.
This makes it a bit harder for some outsiders (e.g. academics) to contribute, but makes it much easier for non-academics to incorporate academic information into summaries. I suspect that an approach of “help EAs find good research and add it to our databases” would go better than an approach of “help good researchers find EA and publish in our journal”, but each plan has its own pro/con list.
HPMOR and Privileging the Question got me into Less Wrong, and started me thinking about the idea that the problems I’d been hearing about weren’t necessarily the problems that would be best to work on.
From there, Money: The Unit of Caring and Efficient Charity: Do Unto Others helped me get interested in GiveWell.
I can’t think of a particular GiveWell article that pushed me further toward EA, though Excited Altruism helped me frame the way I was feeling about all of the ideas. Mostly, as I read their charity evaluations (and their past history of seeing certain charities, like VillageReach, underperform), I realized that glib assertions about impact were often wrong, and that deciding this sort of thing *correctly*—in the absence of a functioning market—was going to be difficult and require that I rely on outside experts to some extent.
The last big step to get me fully enmeshed in the community was starting a student group. The articles with the most influence on that decision were Ben Kuhn’s reflections on starting the Harvard EA group.
I’m also quite unsure about this, largely because there’s an enormous visibility bias; I can think of a few people within EA who did become world-class at something, but barely anyone who tried and failed. But this is something I hope to collect some anecdata on in the future (and through comments on the post).
One more important factor is the extent to which being a part of the EA community might help your chances of getting to a world-class level in something (through access to mentorship, access to lots of people who can help you stick to your goals and spread word about your work, etc.).
This may not be true often, if at all—hanging out with musicians seems like a much better idea than hanging out with EAs if your goal is to win fame as a musician—but I can see 80,000 Hours having some interesting general insights if they ever decided to research the general area of “becoming famous”. (This seems antithetical to most of their current thinking, and would probably not be a good use of their time.)
Thanks for this feedback! I’d tried to define “credences” implicitly, as the things you have after you update, but I think that making it more clear in the final article will be really helpful (especially given that I’m using the term in an unusual, perhaps inadvisable, way).
It’s definitely rare in practice. I’d imagine that we could change this by using some kind of stock phrase that works as a “pause” button in a conversation.
For example: “I hear what you’re saying. But before we get farther, can I share what I thought coming into the conversation? I’m already starting to change my mind, but I think it would be useful to clarify where we both started.”
But that’s pretty long, so hopefully we could eventually condense the idea to something like “let’s both give our impressions before we update” or “tell me what you think, I’ll tell you what I’ve been thinking, and then we can talk it out”. Or someone can come up with a catchy acronym!
Thanks for this feedback, especially the suggested definitions. Your thoughts will definitely be incorporated into the final version. (Also, we should probably be linking to Thinking, Fast and Slow all over the Concepts site—that was a useful reminder!)