Related thought 1: I think some tension can be defused here by avoiding the framing "should EAs be vegetarian?", since answering "no" makes it feel like "EAs should not be vegetarian". Really, it seems to me that a "no" just implies that I can't count the costs of vegetarianism against my Altruism Budget, the same as costs I incur by doing other mundane good things.
Daniel_Dewey
Related thought 2: as someone who’s already vegetarian, I think it would be more costly in terms of effort, bad feels, etc. to switch back than to stay veggie or slowly drift back over time.
Related thought 3: Katja’s points about trading inconveniences and displeasures are interesting. Is it good to have a norm that all goods and “currencies” that take part in one’s altruism budget and spending must be tradeable with one another? Is this psychologically realistic?
One reason for thinking that goods in the altruism budget should be tradeable is that in some sense my Altruism Budget is what I call the part of my life where I take the demandingness of ethics seriously. Is this how anyone else thinks about it?
Thanks for the post! Transplanting a comment from the open thread:
Your thoughts about trading inconveniences and displeasures are interesting. Would it be good to encourage a norm that all goods and "currencies" that take part in one's altruism activities should be tradeable with one another? Is this psychologically realistic?
A similar set of themes came up in the recent post about kidney donation: “we wouldn’t encourage someone to donate a kidney if it meant they would forego significant donations to GiveWell’s top charities. But we don’t see why that should be the case, since giving a kidney is a complement to and not a replacement for monetary donation.” Would you make roughly the same argument there as you do about trading inconvenience and displeasure?
deliberately choosing to be nasty so as to gain some small amount of fungible resource which can be spent on effective charity
I’m sympathetic to this idea, but I’m not sure when to apply it. For example, if someone comes to my door asking for money for a charity I think is inefficient, am I “deliberately choosing to be nasty” in the way you describe?
The proposed effect is psychological, so presumably the distinction should be psychological—that one shouldn’t do things one feels are nasty?
I don’t think most people really alief that eating meat is nasty; at least, I didn’t until I became vegetarian and internalized those feelings over the course of about a month. Does whether a person aliefs that eating meat is nasty matter to this effect?
You may be right that people overestimate the cost. I’m not sure how to gather data about this.
Re: your second point ("there's no reason I can fathom..."), how about this lens: view meat as a luxury purchase, like travel, movies, video games, or music. Instead of spending on these, you could donate the money, and I can imagine someone making a similar argument: "there's no reason I can fathom why you can't simply try to do less of that...". But clearly we see foregoing luxuries as a cost of some kind, and we don't think it's reasonable to ask EAs to give up all their luxuries. When one does give up luxuries for altruistic reasons, I think it's fine to try to give up the ones that are subjectively least costly to give up and that will have the biggest impact.
Other costs: changing your possibly years-old menu for lunch and dinner; feeling hungry for a while if you don't get it figured out quickly; having red-meat cravings (much stronger for some people than others, e.g. not bad for me, but bad for Killian).
I don’t think what I’ve said is a case against vegetarianism; just trying to convey how I think of the costs.
ETA: there are other benefits (and other costs), this is just my subjective slice. An expert review, on which individuals can base their subjective cost breakdowns, would probably be helpful.
There’s this policy report from September 2014, Unprecedented Technological Risks, signed by Beckstead, Bostrom, Bowerman, Cotton-Barratt, MacAskill, Ó hÉigeartaigh, and Ord. Not a long read, but I’d expect the references to be among the best available.
Thanks Tyler!
I’ve often found the EAs around me to be
(i) very supportive of taking on things that are ex ante good ideas, but carry significant risk of failing altogether, and
(ii) good at praising these decisions after they have turned out to fail.
It doesn’t totally remove the sting to have those around you say “Great job taking that risk, it was the right decision and the EV was good!” and really mean it, but I do find that it helps, and it’s a habit I’m trying to build to praise these kinds of things after the fact as much as I praise big successes.
Of course there is some tension; often, if a thing fails to produce value, it’s useful to figure out how we could have anticipated that failure, and why it might not have been the right decision ex ante. Balance, I guess.
Thanks for posting these updates, I’m quite excited about the project!
Have you considered incentive problems stemming from the fact that you require fractions of impact to be allocated among participants so that they add up to 1? My understanding is that this way of allocating credit doesn’t produce the desired results in cases where the project wouldn’t have happened without all participants (see e.g. 5 mistakes of moral reasoning).
If you’ve already answered this, I’d appreciate a link—I know you’ve thought about this quite a bit.
Thanks! This reply makes sense to me, and the refutation of the marginal-contribution strategy is interesting. I can see why you’ve chosen to group tightly complementary contributions.
I am not Nate, but my view (and my interpretation of some median FHI view) is that we should keep options open about those strategies and as-yet unknown other strategies instead of fixating on one at the moment. There’s a lot of uncertainty, and all of the strategies look really hard to achieve. In short, no strongly favored strategy.
FWIW, I also think that most current work in this area, including MIRI’s, promotes the first three of those goals pretty well.
Follow-up: this comment suggests that Nate weakly favors strategies 2 and/or 3 over 1.
Thanks! Going to fix. It was supposed to say “by the time we develop those...”
Thanks! :) After our conversation Owen jumped right into the write-up, and I pitched in with the javascript—it was fun to just charge ahead and execute a small idea like this.
It’s true that this calculator doesn’t take into account the field-steering or paradigm-defining effects of early research, nor the problem of inherently serial vs. parallelizable work. These might be interesting to incorporate into a future model, at some risk of over-complicating what will always be a pretty rough estimate.
Has anyone here seen any good analyses of helping Syrian refugees as a cause area, or the most effective ways to do it? I’ve seen some commentary on opening borders and some general tips on disaster relief from GiveWell, but not much beyond that. Thanks!
Thanks Alasdair!
Update from GiveWell here, with comments: Donating to help with the Syrian refugee crisis
This is a great article, Michelle! Looking forward very much to the follow-up.
I just read Katja’s post on vegetarianism (recommended). I have also been convinced by arguments (from Beckstead and others) that resources can probably be better spent to influence the long-term future. Have you seen any convincing arguments that vegetarianism or veganism are competitively cost-effective ways of doing good?