Now that the 2018 Drive is over, what EAs should do in 2019 will depend on the terms of the match (if it's even offered again). As soon as the next Drive starts, I plan to get a clear answer about how it works, though the more people who ask, the better!
Some possibilities for how it might be run:
The Drive is truly counterfactual (every dollar you give = an extra dollar from the sponsors)
The Drive only affects distribution, not amount (your money just influences where and not whether sponsors give their funds)
Somewhere in between (e.g. only funds beyond $2 million lead to additional matching from sponsors, because they plan to give $2 million no matter what)
This year, it seems like the Drive turned out to be counterfactual for all money raised after $2.4 million, but not necessarily before (we don’t actually know).
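To make the "somewhere in between" case concrete, here's a minimal sketch of the arithmetic (the threshold and donation totals are illustrative assumptions, not the Drive's actual terms):

```python
def counterfactual_match(total_donations: float, threshold: float = 2_000_000) -> float:
    """Extra sponsor money actually caused by donors, assuming the sponsors
    were going to give `threshold` dollars no matter what."""
    return max(0.0, total_donations - threshold)

# Below the threshold, donations only redirect money the sponsors would
# have given anyway; only the excess moves additional money.
print(counterfactual_match(1_500_000))  # 0.0 (no counterfactual impact yet)
print(counterfactual_match(2_400_000))  # 400000.0 (fully counterfactual beyond $2M)
```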
If the Drive is “truly counterfactual”, or is likely to reach the amount above which extra funds will be counterfactual, it is a good opportunity. This would mean that EAs should strongly consider saving up to donate through the Drive, especially if they may not easily be able to do Facebook donation matching (e.g. because they are European and there’s a higher risk that their banks will reject a donation through Facebook).
However, we do want to be sure we don’t flood the Drive with so many donations that the sponsors feel reluctant to run it in future years. If that becomes a concern, it’s not something individuals need to worry about (unless you’re donating mid-five-digit sums or more), but CEA and other orgs may share it around less widely. We’ll see what the terms are this year, though.
Notes on this, from someone who was fairly involved in the GT process:
Even if competition for the Facebook match increases, the amount of data we gathered this year should help us be better-prepared next year, so the “base” percentage of a match should be above 65%, as long as you trust yourself to follow best practices around donating quickly.
Non-Americans had a much harder time getting matched by Facebook for some reason (probably banking/credit card authorization issues). They should take this into account when planning donations.
Other large matching campaigns sometimes pop up, mostly but not only during Giving Season. It’s good to keep an eye out for those (as the community does now) and be ready to move on an opportunity if it happens mid-year.
This also implies that finding out whether a match is actually counterfactual can be a really big deal for the community; I wish I’d worked harder to confirm with the Double Up Drive team whether their match was counterfactual (I think the answer turned out to be “yes”, in which case I should have done more promotion, but I’m not actually sure).
There are other good reasons to donate both throughout the year (e.g. it gives charities better information and smoother cash flow) and at year's end (e.g. many non-EA people are thinking about giving, and you might help to influence them by discussing your donations in public).
It seems valuable for someone to write up a more detailed document on timing considerations: “give now or give later” is a popular question, but often implies giving many years later; “when to give in the next 12 months” is very different.
One more thing that seems important: there are other ways to optimize a donation besides timing! Once you know how much you'll give, and where, you have many options for how to share that information: you can write about it, post on social media, or set up your own "match" for friends (make it truly counterfactual, and try to discourage EA people from using up matching funds that might otherwise attract non-EA donors).
I second the suggestion to add a summary at the top of the post.
The Forum has a feature that took me a while to notice: on pages that show lists of posts, each post has an estimated reading time. The time for this post, for example, was "20m". If someone is thinking of investing 20 minutes in a post (and that number is likely conservative if they need to pause, think, go back, etc.), giving them a summary can be really valuable in helping them make that decision.
Thanks for specifying how you want answers delivered! This isn’t about any one movement (it’s a meta-answer), but I’ll post it here because I think it points to action that you or someone else could take to resolve the question.
Rather than trying to think of all the movements/groups I can and filtering by “seems synergistic”, I’ll try to break this question down.
Any group with which we share some common trait might have synergy; the number of common traits should correlate with the level of synergy.
Some traits of EA:
Cares about charity
Cares about career choice
Cares about using evidence
Cares about maximizing output/being efficient
Cares about certain “neglected” groups: The global poor, farm/wild animals, people in the future
Cosmopolitan, with a focus on the entire world/“grand strategy”
Political lean toward economic conservatism and social liberalism
Charities themselves may not be huge fans of us if they see us as critical/rivals, but people trying to donate are synergistic with us. Which groups of people spend a lot of time trying to donate? People with a lot of money, people planning their legacies, people who work as charitable consultants, etc.
You can do the same thing for each list item, and try to notice which groups fall into multiple categories or have “anti-categories” they expressly don’t fit:
Many Buddhists care about the global poor, cosmopolitanism, and charity.
Libertarians like efficiency, economic conservatism, and social liberalism.
College students like social liberalism, career choice, and cosmopolitanism (but aren't big on economic conservatism). And so on. (For one way to turn these overlaps into a rough score, see the sketch below.)
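Here's a minimal sketch of that heuristic in code, if it helps; the trait lists and scores are just illustrations of the idea, not real data:

```python
# Score each group by how many EA traits it shares, minus any EA traits
# it expressly rejects ("anti-categories").
EA_TRAITS = {"charity", "career choice", "evidence", "efficiency",
             "neglected groups", "cosmopolitanism",
             "economic conservatism", "social liberalism"}

# (shared traits, anti-categories) per group -- illustrative only.
GROUPS = {
    "Buddhists":        ({"charity", "neglected groups", "cosmopolitanism"}, set()),
    "Libertarians":     ({"efficiency", "economic conservatism", "social liberalism"}, set()),
    "College students": ({"social liberalism", "career choice", "cosmopolitanism"},
                         {"economic conservatism"}),
}

def synergy_score(traits: set, anti_traits: set) -> int:
    return len(traits & EA_TRAITS) - len(anti_traits & EA_TRAITS)

for name, (traits, anti) in sorted(GROUPS.items(),
                                   key=lambda kv: -synergy_score(*kv[1])):
    print(f"{name}: {synergy_score(traits, anti)}")
```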
If you do wind up building a list in this way, you should share it on the Forum more generally! I’ve wanted a resource like this for a while but haven’t had time to build it carefully.
Some options that come to mind:
1. Increase the amount of funding available for biologists working on projects within EA-aligned areas (neglected tropical diseases, pandemic prevention, longevity, etc.)
2. Create a professional network for biologists working on said projects and hold events
3. Invite biologists who receive Open Phil or other EA grants to attend EA Global (with free tickets/travel)
4. Something anyone reading this might be able to do: Find biologists working on cool things and make them feel appreciated. Try to understand their work, share their work enthusiastically (to the extent that you understand it), tell them they’re making a difference, and recommend they look into any EA funding options which might be relevant for them. (Be selective in this last case; you don’t want anyone to waste their time applying for grants that aren’t actually a good fit for their projects.)
In general, people either find EA because they like the general mission or because EA contains a lot of work/people relevant to something they liked already. If you’re thinking about a particular interest group (like biologists), think about what biologists value, and ways to let them know the EA community has those things.
Thanks for responding, kbog!
For future reference, we recommend posting answers in the “New Answer” section, rather than as comments. The comment section is meant for asking clarifying questions, or for thoughts that aren’t actually answers. (This is a new feature, so we know it takes some getting used to!)
Thanks for responding, Jemma!
I looked through this list to see which ideas might already exist, or be immediately feasible without building anything new. This caught my eye:
A vetting system for project ideas
What features would this system have that “posting a Google Doc on the EA Forum” doesn’t have? Doing so allows you to choose who can or can’t see it, present your idea in as much detail as you’d like, see how much the EA community likes it in general, get feedback from experts, etc. Would it be helpful to have a centralized space only for project ideas?
(There are, of course, project-management apps that are much better than Google Docs for actually implementing projects, but I’m not aware of any specialized software just for getting feedback on an initial idea.)
CEA is trying to make the Forum the best place to post EA content, in the sense that this is generally where you’ll find the most readers and get the best feedback. We’d hope that “EA projects” are exactly the kind of thing that get posted here, so if there’s a way in which we could add features to the Forum which would make that easier, we’d be interested in hearing about it!
I don’t hold an especially high opinion of Lomborg’s epistemics, since I’ve seen some pretty sharp critiques of The Skeptical Environmentalist (not sure about his newer work). But since the CC reports were mostly produced by non-Lomborg people, that doesn’t influence my view of them very much.
However, I agree with other responses that collaborating with CC comes with a degree of risk given Lomborg’s status as a controversial figure. I think it’s worth trying to learn from their work, but I don’t have any particular view on working with them directly.
Yes. That’s currently how our cross-posting program works. Nate’s blog isn’t active at the moment, but he let us know that we could cross-post old material.
I call this "timeboxing", and it's been really useful to me when I can bring myself to do it. I'll also note that Giving What We Can has acknowledged that they should have spent less time on certain research:
Giving What We Can research spent too many resources evaluating the same interventions and organizations that GiveWell was evaluating.
You’re bringing up a lot of questions that are core to the EA movement, and which have been debated in many different places. The links from CEA’s strategy page might interest you; they go into CEA’s models of how to build communities, and where “impact” comes from.
In general, there’s no simple answer to how much a person’s personal values matter for their potential impact. To give a simplistic example, value alignment with EA seems more important for a moral philosopher (whose work is all about their values) than for a biologist (if someone decides to work on anti-aging research because they want to win a Nobel Prize and think Aubrey de Grey has a cool beard, they may still do excellent, world-shaping work despite non-EA motives).
You may want to check your intuition that older generations are more value-driven against data; older people tend to be more religious, but younger people tend to give “better” answers on many important moral questions (look up “the expanding moral circle” for more on this idea). Meanwhile, the extent to which people make sacrifices to act on their values seems to fluctuate from generation to generation; political protests go from popular to unpopular to popular again, people worry less about pollution but more about eating meat, etc.
Thanks to modern communication systems and growing moral cosmopolitanism throughout the world, this is probably the best time in history to promote something like EA, and conditions are getting better every year.
Even though my logical belief in altruism (stemming from no longer intrinsically valuing the happiness of a stranger) is gone, my heart will always want to help those who really need help through effective altruism. I don't think that's good enough, though, and I really hope somebody can reconvince me to believe in altruism logically instead of just emotionally.
Maybe doing what your heart wants to do is “good enough”, if a lot of people who seem very logical and reasonable to you have come to the same conclusion through more “logical” routes?
I’ve been involved with EA for four years and work full-time at an EA organization, but I still wouldn’t call my commitment to EA an especially “logical” one. I’m one of those unusual people (though they’re much more common within EA) who grew up with a strong feeling that others’ happiness mattered as much as mine; I cried about bad news from the other side of the world because I felt like children starving somewhere else could just as easily have been me.
I reached that conclusion emotionally—but when I went to college and began studying philosophy, I realized that my emotional conclusion was actually also supported by many philosophers, plus thousands of other people from all walks of life who seemed to be unusually thoughtful in their other pursuits. Seeing this was what convinced me I’d probably found the right path, and I haven’t seen strong evidence against EA being broadly “correct” since I joined up.
So even if you don’t “logically” value the happiness of strangers, I think it’s safe to trust your heart, if doing so is leading you to a path that seems better for the world, and you’re still using logic to make decisions along that path. Even if you get lost in a strange city and stumble upon your destination by accident, that doesn’t mean you need to leave and find your way back using a map.
Habryka: Did you see this line in the introduction of this post?
We also recommend charities that are highly cost-effective in improving women’s lives but do not focus exclusively on women’s empowerment. We discuss these organisations, including those recommended by our research partner GiveWell, in other research reports on our website.
On the other hand, it does seem like a specific GiveWell charity or two should have shown up on this list, or that FP should have explicitly noted GiveWell's higher overall impact (if the impact actually was higher; based on my reading of p. 50 of the 2018 GD study, GiveDirectly isn't clearly better than Village Enterprise or Bandhan at boosting consumption: roughly a 0.3-standard-deviation boost in monthly consumption vs. 0.2-0.4 SDs for Bandhan's major RCT, though there are lots of other factors in play).
I think I’ve come halfway around to your view, and would need to read GiveWell and FP studies much more carefully to figure out how I feel about the other half (that is, whether GiveWell charities really do dominate FP’s selections).
I’d also have to think more about whether second-order effects of the FP recommendations might be important enough to offset differences in the benefits GiveWell measures (e.g. systemic change in norms around sexual assault in some areas—I don’t think I’d end up being convinced without more data, though).
Finally, I’ll point out that this post had some good features worth learning from, even if the language around recommending organizations wasn’t great:
The “why is our recommendation provisional” section around NMNW, which helped me better understand the purpose and audience of FP’s evaluation, and also seems like a useful idea in general (“if your values are X, this seems really good; if Y, maybe not good enough”).
The discussion of how organizations were chosen, and the ways in which they were whittled down (found in the full report).
On the other hand, I didn’t like the introduction, which used a set of unrelated facts to make a general point about “challenges” without making an argument for focusing on “women’s empowerment” over “human empowerment”. I can imagine such an argument being possible (e.g. women are an easy group to target within a population to find people who are especially badly-off, and for whom marginal resources are especially useful), but I can’t tell what FP thinks of it.
You make many good points here! One note: I'd suggest changing the title of the piece, which is quite ambiguous at the moment. Maybe something that refers to the topic of bipartisanship, or to journalism that isn't careful enough with logic or statistics?
Do your “correct answer” numbers correct for the people who put something like “no answer” or “prefer not to answer”?
I’d guess that most survey respondents were actually guessing something like “percentage of people who give an answer, and for whom the answer is X”, even if they were supposed to be guessing “percentage of all people who answer X”.
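A quick worked example of the difference, with invented numbers:

```python
# Hypothetical raw survey results, as fractions of ALL respondents.
responses = {"X": 0.45, "Y": 0.40, "no answer": 0.15}

share_of_everyone = responses["X"]  # 0.45: "percentage of all people who answer X"
answered = sum(v for k, v in responses.items() if k != "no answer")
share_of_answerers = responses["X"] / answered  # 0.45 / 0.85, about 0.53

print(f"{share_of_everyone:.0%} of everyone, "
      f"{share_of_answerers:.0%} of those who gave an answer")
```

If respondents were guessing the second quantity while being scored against the first, their answers would look systematically too high.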
Thanks, JP! I’ve always had more questions than I knew what to do with, and now I know what to do with them.
Thanks for writing this out, Habryka!
These are all important considerations, and while I disagree about the strength of the methodology (it seems stronger than that of many popular posts I've seen on the Forum), I agree that a more comparison-friendly impact measure would have been good, along with a justification for why we should care about this subfield within global development.
I’m not sure how the Forum should generally regard “research into the best X charity” for values of “X” that don’t return organizations with metrics comparable to the best charities we know of.
On the one hand, it can be genuinely useful for the community to be able to reach people who care about X by saying “with our tools, here’s what we might tell you, but if you trust this work, maybe also look at Y”.
On the other hand, it may drain time and energy from research into causes that are more promising, or dilute the overall message of EA.
I guess I’ll keep taking posts like this on a case-by-case basis for now, and I thought this particular case was worth a (non-strong) upvote. But I have a better understanding of why one might come to the opposite conclusion.