I research the psychology of effective altruism and longtermism.
Lucius Caviola
Thanks Ben!
13.6% (3 people) of the 22 students who clicked on a link to sign up to a newsletter about EA already knew what EA was.
And 6.9% of the 115 students who clicked on at least one link (e.g. EA website, link to subscribe to newsletter, 80k website) already knew what EA was.
Another potentially useful measure (to get at people’s motivation to act) could be this one:
“Some people in the Effective Altruism community have changed their career paths in order to have a career that will do the most good possible in line with the principles of Effective Altruism. Could you imagine doing the same now or in the future? Yes / No”
Of the total sample, 42.9% said yes to it. And of those people, only 10.4% already knew what EA was.
And if we only look at those who are very EA-sympathetic (scoring high on EA agreement, effectiveness-focus, expansive altruism and interest to learn more about EA), the number is 21.8%. In other words: of the most EA-sympathetic students who said they could imagine changing their career to do the most good, 21.8% (12 people) already knew what EA was.
(66.3% of the very EA-sympathetic students said they could imagine changing their career path to do the most good.)
A caveat is that some of these percentages are inferred from relatively small sample sizes — so they could be off.
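To give a feel for how wide the uncertainty is at these sample sizes, here is a rough sketch computing a 95% Wilson score interval for a proportion like 3 out of 22. (This is purely illustrative; the interval method is my choice, not something from the survey analysis itself.)

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 3 of the 22 newsletter sign-ups already knew what EA was (13.6%)
lo, hi = wilson_interval(3, 22)
print(f"Point estimate 13.6%, 95% CI roughly [{lo:.1%}, {hi:.1%}]")
```

With 3 out of 22, the interval spans from under 5% to over 30%, which is why percentages inferred from these small subsamples should be treated as rough estimates.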
We’ve asked them about a few ‘schools of thought’: effective altruism, utilitarianism, existential risk mitigation, longtermism, evidence-based medicine, poststructuralism (see footnote 4 for results). But very good idea to ask about a fake one too!
(Note that we also asked participants who said they have heard of EA to explain what it is. And we then manually coded whether their definition was sufficiently accurate. That’s how we derived the 7.4% estimate.)
Most students who would agree with EA ideas haven’t heard of EA yet (results of a large-scale survey)
We considered this too. But the significant correlations with education level and income held even after controlling for age. (We mention this below one of the tables.)
I see that it may seem surprising at first glance that education doesn’t correlate positively with our two scales. (Like David, I am not sure if the negative correlation will hold up.) It seems surprising because we know that most existing highly engaged EAs are highly educated (and likely have high cognitive abilities). But what this lack of positive correlation shows is simply that high education (and probably also high cognitive abilities) is not required to intuitively share the core moral values of EA.
As we point out in the article, there are likely several additional factors that predict whether someone will become a highly engaged EA. And it’s possible that education (and likely high cognitive abilities) is such an additional, and psychologically separate, factor.
Just to add to what David said: It’s difficult to say whether our NYU business sample or our MTurk sample is more representative of our primary target audience. The best way to find out is to do a large representative survey, e.g., amongst students at a top uni (of all study subjects—not just business).
What psychological traits predict interest in effective altruism?
Yes, it was initially quite surprising that so many donors are willing to support the matching system. We found similar results when we tested it with MTurk participants (who were given a small bonus which they could give or keep; see Study 7). One possibility is that it’s a kind of intergenerational reciprocity tendency, where people who benefited from the generosity of previous donors want to pay it forward to the next ones.
Thanks!
Perhaps, but we are uncertain. It depends on whether we can find a scalable strategy for reaching donors who are amenable to EA but not yet engaged with effective altruism. Such a strategy might come from paid advertising, further earned media coverage (our strategy so far), or from partnerships with institutions (e.g., businesses, universities, or wealth managers) that offer guidance or incentives for charitable giving.
Yes, we’ve recently introduced our donors to GWWC. (Results of that campaign are not in yet.)
Thanks, Linch.
First, you’re right that several EA psychology researchers are studying how people donate to charity. But most of them (including myself) are also studying other EA-related topics, such as the psychology of xrisk and longtermism, moral attitudes towards animals, etc. My hunch is that only a minority of currently ongoing EA psychological research projects have charitable giving as their primary topic of interest.
Second, as David pointed out, donation choices are a useful behavioral outcome measure when studying the public’s beliefs, attitudes, and preferences about EA-related issues more generally. In many cases, the goal of the research is not necessarily to understand how people donate to charity specifically but to understand the fundamental psychological drivers of and obstacles to EA-aligned attitudes and behavior more generally (example). Studying these in the context of charitable giving is an obvious and often straightforward first step — in the hope that these insights can be generalized.
For example, the fact that people are willing to split their donation, as described in the post, tells us something more fundamental about people’s preference structure (the fact that most people value effectiveness, but only as a secondary preference), the potential market size of EA in the general public, and possible routes to reaching a wider adoption of EA ideas. Another example is the study of individual differences: who are the people who immediately find EA ideas appealing, where can we find them, and how should we target them? It’s natural to test this, in part, by observing people’s donation choices.
My view on prioritization is that psychological research can be useful when it yields such fundamental insights. But there can also be really useful applied research, such as marketing or psychometric research that can be practically useful for recruitment.
Giving Multiplier after 14 months
I don’t think our findings suggest that people have a preference for populations with higher variance in welfare (i.e. greater differences in how happy people are). All else equal, people probably have a strong preference for fair welfare distribution (even in the US). But sometimes they may choose the option that contains more welfare variance because this population has a higher average or total level (or for some other reasons).
I agree with you that it would be very interesting to do a cross-cultural study. I don’t have a specific hypothesis about cross-cultural differences, though. Note that there already exists some cross-cultural research on fairness and prosocial behavior.
Thanks, these are great points!
As for your first question about the philosophical implications of this psychological research: In general, the primary goal of our project was a descriptive one, and it would require a separate project (ideally led by philosophers) to figure out what the possible normative implications are. I also believe that we need much more empirical research to understand in greater detail what exactly the psychological mechanisms are that drive people’s population ethical views. I see this as a very first exploration.
That said, I agree with much of what Jack says in the other comment. We should be cautious in simply accepting lay people’s intuitive reactions to these tricky moral dilemmas or even making our policies based on them. Most people’s reactions are very uninformed (most have never thought about these questions before), their reactions are often inconsistent, framing-dependent and — as we saw in some of our studies — people themselves tend to revise their opinions after more careful reasoning.
At the end of our paper, we say: “However, this [the fact that people’s judgments are inconsistent and biased] does not mean that it is not valuable to examine lay people’s population ethical intuitions. Population ethics has important implications for policy making and global priority setting. Philosophers often rely on their own intuitions when discussing population ethics. An understanding of the psychology of these population ethical intuitions can therefore be informative. For example, greater awareness of the specific psychological mechanisms and biases driving these intuitions could elucidate which ones should be endorsed under reflection and which ones not. The apparent inconsistencies between some of these intuitions demonstrate that it may be impossible to formulate a population ethical theory that is both consistent and intuitive (cf. impossibility theorems; Arrhenius, 2000). One possible solution could be a debunking approach: attempting to understand the psychological underpinnings of different philosophical positions, with an eye to identifying those that result from unreliable or biased cognitive processes. This in turn allows the resolution of inconsistency by discounting certain intuitions as untrustworthy (cf. Greene, 2014). Another possible resolution is to accept the fact that we are internally conflicted and, as a consequence, uncertain which moral theory is right (MacAskill, Bykvist, & Ord, 2020).”
As for your second question about the adding-people experiment (Studies 2a-b): You are right that participants may misinterpret our dilemmas and questions. This is a general issue with studying such abstract questions, and we tried our best to make things as clear as possible to people. In most studies, for example, we double-checked whether people understood and accepted our assumptions (and excluded from the analyses participants who failed these checks).
In Studies 2a-b, the question we asked was “In terms of its overall value, how much better or worse would this world (containing this additional person) be compared to before?” (1 Much worse − 7 Much better). Even though this seems pretty clear to me, I think you’re right that it’s possible that some participants also considered the indirect effects that adding a new person would have on other people. One reason why I believe our finding would largely stay the same, even if we ensured that participants did not take the indirect effects into account, is the empty world condition in Study 2b. (And this relates to your comment.) In Study 2b, we indeed had a condition where the initial world contained zero people (empty world) and another condition where the initial world contained 10 billion people (full world). And even in the empty world condition, where you’d expect such indirect-effect considerations to be ruled out, we still find the same pattern. (That being said, I believe it’s possible that a different question and different framing could yield different results.)
Regarding your comment, let me clarify: in Study 2a, the initial world contained 1 million people, but in Study 2b we tried to replicate this effect with a scenario where the initial world contained either zero people or 10 billion people. I believe this should be described correctly in the paper (if not, please let me know). But I noticed that there was an incorrect paragraph in our supplementary materials, which may have led to this confusion and which I’ve now fixed. (Thanks for making me aware of it!)
The psychology of population ethics
Virtues for Real-World Utilitarians
You are right that if someone only cares about their favorite charity, then donating through GM doesn’t give them any value. After all, GM never helps you to get more value for your favorite charity than you could get by donating directly to your favorite charity. But we also don’t claim that we do that. On our website, we say: “Give to both your favorite charity and a super-effective charity recommended by experts. We’ll add to your donations.” (The EA newsletter text frames things slightly differently and perhaps that’s indeed not the optimal way of promoting GM.)
But if someone cares about both their favorite charity and about giving effectively, donating through GM can get them more value. Keep in mind that our target audience are non-EA donors, many of whom haven’t heard of EA or about our highly effective charities before.
In our studies we find that many non-EA people (ca. half of our Mechanical Turk participants!) are willing to split their donation 50/50 between their favorite and a highly effective charity when they are offered such a splitting option, even if no matching is offered. This shows that surprisingly many people do have a preference to give to effective charities. They just don’t know about effective charities yet and don’t consider the option to split their donation. The point of GM is to inform non-EA donors about effective charities and offer them this splitting option.
Suppose you have a donor who cares about their favorite charity and a very effective charity. They want to give 90% to their favorite and 10% to the effective charity. They could either donate directly to these two charities or they could donate through GM. If they donate through GM, the system adds on top of their donations.
The part that is added on top of their favorite charity is clearly counterfactual because the matching funder wouldn’t have given to that charity. The part that is added on top of the effective charity is less counterfactual because the matching funder would have given anyway to effective charities. But in expectation it is partly counterfactual because the donor can influence which specific effective charity this part of the funding should go to (and many donors may care much more about some effective charities than others). (Your efficient market hypothesis is interesting and I haven’t considered it. But I doubt that the market for effective charities is completely efficient.)
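The arithmetic behind this can be sketched in a few lines. (The amounts and the flat 10% match rate below are hypothetical, chosen just to illustrate the counterfactual logic; they are not Giving Multiplier’s actual rates.)

```python
# Hypothetical illustration of the counterfactual matching logic.
donation = 100.0
favorite_share = 0.90   # donor gives 90% to their favorite charity
match_rate = 0.10       # assumed flat match rate on both portions (illustrative)

favorite = donation * favorite_share
effective = donation * (1 - favorite_share)

# Fully counterfactual: the matcher wouldn't have funded this charity otherwise.
match_favorite = favorite * match_rate

# Partly counterfactual: the matcher would have given to *some* effective
# charity anyway, but the donor chooses which specific one.
match_effective = effective * match_rate

print(f"Favorite charity receives:  ${favorite + match_favorite:.2f}")
print(f"Effective charity receives: ${effective + match_effective:.2f}")
```

Under these assumed numbers, the favorite charity ends up with $99 instead of $90, and the effective charity with $11 instead of $10, with the extra coming from the matching funder.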
As Aaron pointed out, all of this is transparently explained on our FAQ page.
Do my matched donations have an impact?
Yes. The donors who provided the matching funding would likely not have donated to the specific charities that you have chosen. Therefore, by making a donation through Giving Multiplier, you don’t just decide which charities your own money goes to but also which specific charities the added (i.e., matched) amounts—provided by the matching funders—go to. Note that most matching funders would likely have donated their amounts to a highly effective charity by default. But they would not have donated to your favorite charity, and it’s unlikely that they would have donated to exactly the effective charity that you have chosen.
Our website is new and if there are ways to improve, we’d consider these. But to be clear: there is absolutely no intent of deceiving donors.
Yes, it’s a similar idea to the “Matching as donor coordination” idea I describe in this post. (Feel free to contact me if you have any thoughts.)
Thanks for these really helpful suggestions, Peter!
We are planning to test some of the things you suggest. We kept our post-donation survey short because we wanted to focus on our main research question and not try to do too many things at once. But if we end up having a lot of donors, we might send them a survey via email to find out more about their demographics, beliefs, and preferences. We’re not planning to do A/B testing at this point. But if we start to get lots of donors, we’d definitely consider doing A/B testing to optimize the user experience and get more people to donate.
At this point, our primary goal is to test if the technique works in the real world and if we can get enough donors. Yes, we want to do media releases to get more traffic. And we are trying to partner with organizations and services to spread the word. I like your idea of reaching out to workplace giving services. If you have concrete ones in mind or have ideas how I could find these, please shoot me a DM!
This is a good point and we’ve considered it. I agree that there are advantages to allowing matchers to support only specific causes (or charities).
But there are also downsides. In addition to the ones you list, the matching system would be somewhat less honest. Since the matcher would by default have donated to that cause/charity anyway, you as a donor don’t really influence where the matcher’s funding goes. With our current system, in contrast, you do influence which specific charity/cause the matcher’s funding goes to. But this comes at a cost to the matching funder, who has to be willing to support any of the nine effective charities we currently list.
I still think it’s worth thinking more about allowing for cause-specific matchings. But we don’t plan to implement it anytime soon.
The Global Risk Behavioral Lab is looking for a full-time Junior Research Scientist (Research Assistant) and a Research Fellow for one year (with the possibility of renewal).
The researchers will work primarily with Prof Joshua Lewis (NYU), Dr Lucius Caviola (University of Oxford), researchers at Polaris Ventures, and the Effective Altruism Psychology Research Group. Our research studies psychological aspects of relevance to global catastrophic risk and effective altruism. A research agenda is here.
Location: New York University or Remote
Apply now

Research topics include:
Judgments and decisions about global catastrophic risk from artificial intelligence, pandemics, etc.
The psychology of dangerous actors that could cause large-scale harm, such as malevolent individuals or fanatical and extremist ideological groups
Biases that prevent choosing the most effective options for improving societal well-being, including obstacles to an expanded moral circle
Suggested skills: Applicants for the Junior Research Scientist position ideally have some experience in psychological/behavioral/social science research. Applicants for the Research Fellow position can also come from other fields relevant to studying large-scale harm from dangerous actors.