I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.
zdgroff
The U.S. event is the same weekend as the National Animal Rights Conference. Given that animal activism is generally described as one of the four primary cause areas, I’m sad to see this happen.
Beautiful post. Peter Singer has written that EAs use reason over empathy, but I think this post reminds us that while we deliberate with reason, we often need to motivate ourselves through empathy.
Is there an EA mentoring program? I feel like that might address some of the friction between EAs and the opportunities available to them.
I’ve observed the last point happening among many people I know, including myself—people absorb and act on EA values while not necessarily remaining in the community. It seems like creating people who act by EA values is more important than creating an EA community per se. I’m not sure how much community building helps bring this about, though. It would be interesting to see more research and thinking on the connection between community building and generating new EAs (a similar debate is going on among animal activists at the moment).
To push back a bit against the fear of multiple movements, it seems like you could have multiple movements that all overlap with EA. For instance, animal rights and global poverty both overlap with EA, as do immigration work and criminal justice, increasingly. This parallels social justice movements where, say, gay rights and women’s rights both overlap with social justice, and I don’t see too much harm coming from this (indeed, many social justice movements have tight alliances). Separation might allow specialization, which could easily be net positive.
I’m thinking of switching a ~$200/mo. donation from SCI to GWWC. Do you prefer a lump-sum donation to a monthly one?
Also, it seems your claim is that movement-building charities and effective giving promotion are under-funded relative to direct charities. Any sense of what ratio would be optimal?
I was just curious about the ratio and whether you had given thought to what the optimal level is, not asking for donation-splitting purposes. FWIW, I’ve heard the argument against splitting small donations. I do split them, but admit it’s probably not the best thing to do. I’m just indecisive and get fulfillment out of supporting both causes, so I’ve permitted myself this irrational behavior. My guess is there may be others like me, and maybe there are some who split donations because they think we should split our donations the way we would like overall donations to be split.
It seems like you may have more insight than anyone else on whether you should go into philosophy. If you have a high-impact idea or set of ideas that you think you can contribute, perhaps you should go into it. Before MacAskill and Ord, I don’t think many people thought there was an applicable and useful argument to be made in philosophy, but they proved that wrong.
I’ve wondered this myself—could be very high impact given the multiplier effect. On the other hand, in any career you have advocacy and influence potential, and not every department and university will necessarily let you offer this course.
What field are you studying? It seems like biology, but not sure. I’m planning on applying to programs in economics. Interested in cause prioritization and studying the interplay of social networks with economic decisions. Interested in seeing the decision you make.
Collective Action and Individual Impact
The thought you ascribe to most EAs (which I think is very likely accurate) is something like this: if I donate now, it will just mean I divert money from what I think is a highly effective charity to GWWC, and then another EA who would have donated to GWWC later on will instead donate to a charity that they think is highly effective.
So the cost of donating to GWWC seems to be that money goes from your favorite charity to some other EA’s favorite charity, and the benefit is that GWWC spends less time fundraising. Perhaps EAs are just thinking about this wrong—why should we think our favorite charity is that much more effective than some other prospective GWWC donor (or at least so much more effective that it is worth wasting GWWC staff’s time)?
Thanks for your helpful reply. Will take note of this in my response.
Wow, great write-up. This would make a good post for GWWC. I’d heard through the grapevine (I work at an organization funded by Gates) that they are having trouble giving away all of their money to good causes and are not giving it away fast enough. Most of the reasons given here involve resource constraints, and it’s not clear to me that those constraints actually bind in practice.
I would think winning is likely to depend heavily on cause area, or at least on particular assumptions that are not agreed upon in the EA community, at least if it is to be sufficiently concrete. Most EAs could probably agree that a world where utility is maximized (or some fairly similar metric or optimization function) is a win. What world would realize this depends on views about the value of nonhuman animals, the value of good vs. bad experiences, and other issues where I’ve seen quite a bit of disagreement in the EA community.
I’ve had a similar shift. One other consideration is that I used to think it was important to spread EA ideas by hanging out with non-EAs primarily, but I’ve come to believe the social influence of other EAs makes me more effective.
I stopped doing a job that was giving me severe anxiety and depression. The job was elementary school teaching, and for what it’s worth, was probably low impact relative to what I’m doing now. But even if it had been a potentially high impact job, I think I would have had a very low (or perhaps negative) impact given how unhappy I was.
Lesson: don’t do a job that makes you miserable.
I usually see MIRI’s goal in its technical agenda stated as being “to ensure that the development of smarter-than-human intelligence has a positive impact on humanity.” Is there any chance of expanding this to include all sentient beings? If not, why not? Given that nonhuman animals vastly outnumber the human ones, I would think the most pressing question for AI is its effect on nonhuman animals rather than on human ones.
“From single-celled to pluricellular to multicellular organisms or from hunter-gatherers to the EU, the history of evolutionary forces that resulted in human society is a history where cooperation has emerged at increasingly large scales.”
I often hear claims like this, but I’m not sure these things are really all that analogous beyond the superficial ‘smaller → bigger’ aspect. For instance, there is no intentionality with the single-celled organisms, only chance and natural selection, whereas there seems to be intentionality in the latter example.
While I tend to agree that this is the overall trend, there are numerous counterexamples, such as empires decolonizing, the cession of individual rights by Leviathans, and perhaps even entropy on a physical level. So the deck may be even more stacked against us than we think.
You might want to include a link to some EA sites, books, or articles to clarify that “effective altruist” has a specific meaning. People who have not encountered the EA movement might not realize the level of intellectual rigor and roughly consequentialist reasoning expected.