Once again, I am quite late to the party, but for posterity’s sake, I just want to add a few points: First, this is exactly what I do, and it’s just not that hard. Second, I was formerly a public interest lawyer (doing impact litigation) and believe the skill set required for that job is very similar to the skill set required for my current job (commercial litigation). Lastly, I am doing what I am doing on the belief that it does the most good; I’ve considered the alternatives! If anyone seriously believes I’m mistaken, I’d very much like to hear from them.
I’ve noticed that what “EA” is seems to vary depending on the audience and, specifically, why it is that the audience is not already on board. For example, if one’s objection to EA is that one values local lives over non-local lives, or that effects don’t matter (or are trumped by other considerations), then EA is an ethical framework. But many people are on board with the basic ethical precepts yet simply don’t act in accordance with them. For those people, EA seems to be a support group for resolving cognitive dissonance.
Thanks, Ryan. That’s all very helpful.
(And the MIRI reference was a superintelligent AI joke.)
I’m thinking more along the lines of mentors for the mentors, and I think one solution would be a platform on which to crowdsource ideas for individuals’ ten-year strategic plans. In a perfect world, one would be able to donate one’s talents (in addition to one’s money) to the EA cause, which could then be strategically deployed by an all-seeing EA director. Maybe MIRI could work on that.
Absolutely re personal factors. “Outsource” is an overstatement.
And no, I don’t mean decisions like whether to be a vegetarian (which, as I’ve noted elsewhere, presents a false dichotomy) or whether to floss, which can be generically answered.
I mean a personalized version of what 80,000 Hours does, but for people mid-career. Imagine several people in their mid-30s to mid-40s, say a USAID political appointee, a law firm partner, and a data scientist working in the healthcare field, who have decided they are willing to make significant lifestyle changes to better the world. What should they do? This seems to be a very different inquiry than it is for an undergrad. And for some people, a lot turns on it: millions of dollars. Given the amount at stake, it seems like a decision that should be taken just as seriously by the EA community as how an EA organization should spend millions of dollars.
I love the idea of outsourcing my donation decisions to someone who is much more knowledgeable than I am about how to be most effective. An individual might be preferable to an organization for reasons of flexibility. Is anyone actually doing this—e.g., accepting others’ EtG money?
In fact, I’d outsource all kinds of decisions to the smartest, most well-informed, most value-aligned person I could find. Why on earth would I trust myself to make major life decisions if I’m primarily motivated by altruistic considerations?
The trade-off argument is right as far as it goes, but that might not be as far as we think: the metaphor of the “willpower points” seems problematic. As MichaelDickens and Jess note, many lifestyle changes have initial start-up costs but no ongoing costs. And many things we expect to have ongoing costs do not (see, e.g., studies showing that, on average, more money and more things don’t make us happier; conversely, less money and fewer things might not make us less happy). An earning-to-give investment banker might use the trade-off logic to explain why she is not trading her sports car for a Honda Civic, and while that might be right in some cases, I think more often it would be wrong. Point being, it would be a shame if we used the trade-off argument to avoid trying lifestyle changes that, long term, might have no (or small) ongoing costs to our quality of life.
More generally, diet is not a binary choice. Avoid animal products when it’s convenient; don’t when it’s inconvenient. Over time, you might learn it’s not as inconvenient as you thought.
I use the recycling analogy when talking to people about this issue. I consider myself to be one-who-recycles, but if I have a bottle in my hand and there’s nowhere convenient to recycle it, I’ll throw it away. Holding onto that bottle all day because I’ve decided I’m a categorical recycler seems kind of silly. I treat food the same way.
Regarding your broader point re consistency, my guess is that we greatly over-emphasize diet relative to other comparably low-cost things we can do to make the world a better place, in large part because there are organized social movements around diet. That of course doesn’t necessarily mean we should eat more animal products, but rather that we should try to identify other low-hanging-fruit means of improving the world.
Wonderful essay. Thanks, Jess. A few responses:
(i) It’s not clear to me that the vegan-vegetarian distinction makes sense, as I believe, for example, that consuming eggs or milk can be more harmful (in terms of animal suffering) than certain forms of meat consumption.
(ii) Related to (i) (and to Paul_Christiano’s point re “other ways to make your life worse to make the world better”), other than for signalling/heuristic reasons, I don’t think being categorically vegan/vegetarian is all that important. I believe that reducing animal products in my diet is always a good thing. I also believe that not buying coffee at coffee shops and, instead, donating the money to an animal-welfare organization is always a good thing. But I don’t make the latter a categorical life philosophy. For that reason, I treat my diet just like every other facet of my life: I try to understand the consequences of my actions, identify the ethically ideal direction, and move in that direction wherever I reasonably can, recognizing that I am a deeply imperfect ethical actor.
(iii) Soylent is the solution to all!! It’s now vegan, good for you, cheap, etc. I’d consume it in place of most meals even if I had no regard for animal welfare.
Very interesting, Ben. Thanks for posting.
Here’s an idea indirectly related to your article: The EA community has an incredible amount of intellectual talent. And it is unusual as far as communities go in that everyone’s motive to make money is selfless. For that reason, I am indifferent to whether I make a million dollars all by myself or whether I make it with the help of 40 other people (aside from differences in the initial investment). Given that, isn’t the EA community uniquely positioned to crowdsource a business idea, fund that idea with an EA-friendly VC, hire EA types to run the business, and then give the vast majority of the returns (if any) to EA causes? Would it be a good investment for EA Ventures, for example, to organize an entrepreneurial think tank?
I know next to nothing about their methods other than that (i) they’ve been developing them for a long time and (ii) they seem to be effective. The singular importance of recruitment is an unusual quality for a social movement, but it’s one that EA and evangelical religions share.
Should we be paying much more attention to what evangelical religions have done (effectively or ineffectively) to recruit?
I’m a bit late to the party, but a couple quick thoughts from a kidney donor who also does earning to give. First, I used paid vacation time to donate my kidney, so the entire discussion about salary trade-off is inapplicable to me (and I assume would be for most high-earning people). Second, I was working again about five days after surgery. I used those five days to read books I had been planning to read but for which I didn’t have the time.
Another benefit of kidney donation that needs to be taken into account: My understanding is that most forms of kidney failure affect both of a person’s kidneys, meaning whether you have one or two is irrelevant. But because I donated a kidney, if I ever need a kidney, I go to the top of the waiting list. Therefore, I have actually hedged against the risk of (certain types of) kidney failure by donating a kidney.
Thanks for the link, Tom. And yes, I agree that my hypothesis is an indirect answer to the question you posed elsewhere in this thread.
I don’t personally know that many EAs, but I am certainly on the cold side of the emotional spectrum. I am sure there are psych/neuroscience papers on this, but I wouldn’t be surprised if emotional and cognitive empathy can work at cross purposes (see, e.g., trolley hypotheticals), which might be why those who have a lot of the latter have less of the former.
Two questions: (i) do you agree with my hypothesis; and (ii) if so, does it matter?
Non-directed kidney donation seems to be a part of the EA culture, for obvious reasons. Separately, a cornerstone of the EA perspective is that emotional empathy is not enough: cognitive empathy (i.e., reason) should play a critical, even dominant, role in our moral decision-making.
A recent, highly publicized study found that non-directed kidney donors have greater-than-average emotional empathy: “The results of brain scans and behavioral testing suggests that these donors have some structural and functional brain differences that may make them more sensitive, on average, to other people’s distress.” http://www.georgetown.edu/news/abigail-marsh-brain-altruism-study.html
Hypothesis: That doesn’t hold for EA types who have donated kidneys.
Hey Brenton--
I’m printing as I go. For example, if you wanted 100 copies, I’d order a 100-copy print run and have it shipped directly to Australia (that would be $4.01/copy with shipping). There isn’t much of a volume discount for printing, so this appears to be the most economical strategy.
Legally, we can’t simply exchange cash. (Otherwise, everyone would be able to order the book directly from the printer.) That’s why I’m presently footing the bill. I appreciate the tax deduction point, and I am working on a solution. (By that same logic, however, shouldn’t you be giving me all of your to-be-donated cash, so that I can donate it on your behalf and get a further tax deduction?)
You can send me your address via email: mhpage@gmail.com
EA Handbook Hard Copies for Local Hubs
Doesn’t every organization/social movement that efficiently allocates resources have diminishing returns beginning with the first dollar? One reason why this could theoretically not be true is if efficient use of capital requires upfront investment in infrastructure, but I don’t know if that applies here. The concept of diminishing returns seems distinct from leverage (though obviously not unrelated).
The signalling issue is complicated, and I’m open to suggestions. As I’m a consequentialist, I’m open simply to lying.
Unless I’m missing something, it seems like we all should be giving to EA advocacy groups until the amount of resources available to those groups reaches the threshold at which the donation is no longer leveraged. What’s the counter-argument? Has there been any analysis of what that threshold level is? In other words, by what multiple must the total resources currently available to groups like Giving What We Can and The Life You Can Save grow before I should start donating to the Against Malaria Foundation instead?
Here’s an EA forum post on the second (Harvard Law) article: http://effective-altruism.com/ea/8f/lawyering_to_give/
Although well-intentioned, I think the Harvard Law article is dangerous. The legal community is potentially pretty low-hanging fruit for EA recruitment: it contains a lot of people who make a lot of money and who generally make misguided but well-intentioned charitable decisions, both regarding how to donate their money and how to use their talents.
Changing the culture of this community will be complicated, however. Early missteps could be extremely costly to the extent they give the community the wrong initial perception of EA-style thinking. In short, the stakes are high, and although I commend those who want to try to make inroads into the community, I suggest treading cautiously.