Sure. But if you change 10^3 into 10^4, we’re still talking about the same order of magnitude of cost-effectiveness as GiveDirectly’s cash transfers (depending on how highly consumption should be valued against DALYs averted, etc.). Even if we assume that a full accounting would show the cost-effectiveness of donations to medical research to be worse than that, what other domestic charities would have a “first-guess cost-effectiveness estimate” even in the same ballpark?
pappubahry
Notes on GiveWell’s recommended charities
If your non-utilitarianism makes you “want to maximally remedy unnecessary human indigence”, and my utilitarianism* makes me want the same, then what is the issue? It seems that at an operational level, we both want the same thing.
It just seems obvious to me that, all other things equal, helping two people is better than helping one. If various moral theories favoured by academics don’t reach that conclusion, then so much worse for them; if they do reach that conclusion, then all the better. And in the latter case, the precise formulations of the theories matter very little to me.
*I’m not purely utilitarian, but I am when it comes to donating.
My philosophical background is that of the physics stereotype that utterly loathes most academic philosophy, so I’m not sure if this discussion will be all that fruitful. Still I’ll give this a go.
This simply begs the question: “helping” and “people” are heavily indeterminate concepts, the imputation of content to which is heavily consequential for the action-guidance that follows.
At some pretty deep level, I just don’t care. I treat statements like “It is better if people get vaccinated” or “It is better if people in malaria-prone areas sleep under bednets” as almost axiomatic, and that’s my starting point for working out where to donate. If there are lots of philosophers out there who disagree, well that’s disappointing to me, but it’s not really so bad, because there are plenty of non-philosophers out there.
Suffice it to say that nearly all utilitarians are intuitionists today, which I honestly can’t take seriously as an independent reason for action, and is a standard by which utilitarianism sowed its own death—any and all forms of utilitarianism entail serious counter-intuition.
The utilitarian bits of my morality do certainly come out of intuition, whether it’s of the “It is better if people get vaccinated” form or by considering amusingly complicated trolley problems as in Peter Unger’s Living High and Letting Die. And when you carry through the logic to a counter-intuitive conclusion like “You should donate a large chunk of your money to effective charity” then I bite that bullet and donate; and when you carry through the logic to conclude that you should cut up an innocent person for their organs, I say “Nope”. I don’t know anyone who strictly adheres to a pure form of any moral system; I don’t know of any moral system that doesn’t throw up some wildly counter-intuitive conclusions; I am completely OK with using intuition as an input to judging moral dilemmas; I don’t consider any of this a problem.
it seriously affects the evaluation of outcomes (i.e. the xrisk community...)
Yeah, the presence of futurist AI stuff in the EA community (and also its increasing prominence) is a surprise to me. I think it should be a sort of strange cousin, a group of people with a similar propensity to bite bullets as the rest of the EA community, but with some different axioms that lead them far away from the rest of us.
If you want to say that this is a consequence of utilitarian-type thinking, then I agree. But I’m not going to throw out cost-effectiveness calculations and basic axioms like “helping two people is better than helping one” just because there are people considering world dictators controlling a nano-robot future or whatever.
There’s a long comment on this topic by Ryan Carey on the FB group here—basically the policy is carefully curated content of a pretty generalist nature for the first month, so that the forum will be as inclusive as possible to begin with. Then after that first month it gets thrown relatively open.
I also imagine this forum as a place of links and rapid-fire discussion in addition to the longer stuff, and in a few weeks’ time we’ll get to see if that mode of posting becomes popular.
I have some “I would like a pony”-style feature requests below, so I’ll put my less ambitious suggestion at the start: Randomise the order of the names in the donations list (perhaps allowing an alphabetical sort), re-randomising each time someone visits the page. It’s not going to be a website many of us visit regularly, but it’d be more fun if each time we went there, we saw different people’s donations and plans.
I’m a little underwhelmed by the registry as it stands. The past donations are just page after page of aggregate donation totals and charity lists, presented in some fixed but strange order (ordered by when the people took the survey perhaps?). I’d like to see (while being aware that this is somewhere between annoyingly difficult and not worth the effort):
- A sortable and filterable table of past donations.
- Recent donations (i.e., the ‘2013 donations’ or ‘2014 donations’) separated by charity, e.g., 1000 USD to AMF, 1000 USD to SCI, 100 USD to CEA.
I’d like to see the donation registry as something you could use to measure where EA donations are going and how much is being donated (only the self-selecting people who fill out the survey will be counted, but it’d still be interesting to me).
With all the free-form text inputs, getting the data cleaned to the point where you could sort/filter it would take a bit of ongoing work (I’d be happy to volunteer to do this data cleaning). And having donation amounts for individual charities means re-writing that part of the survey, and probably not as many responses. So I’m not holding my breath! But these are features that are in my imagined ideal EA donations register.
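To illustrate the kind of cleaning involved, here’s a minimal sketch of mapping free-form charity names to canonical ones so the totals can be sorted and filtered. The name variants and amounts below are invented examples, not real registry data:

```python
# Hypothetical sketch: normalise free-form charity names and sum donations.
from collections import defaultdict

# Invented example survey responses (name as typed, amount in USD).
raw_donations = [
    ("Against Malaria Foundation", 1000),
    ("AMF", 500),
    ("S.C.I.", 100),
]

# Hand-maintained mapping from free-form names to canonical ones.
canonical = {"Against Malaria Foundation": "AMF", "AMF": "AMF", "S.C.I.": "SCI"}

totals = defaultdict(float)
for name, usd in raw_donations:
    totals[canonical.get(name, name)] += usd

print(dict(totals))  # → {'AMF': 1500.0, 'SCI': 100.0}
```

The mapping table is the “ongoing work” part: every new free-form spelling needs a new entry.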
To me, the decision (freely made) to have children is morally neutral—I am not utilitarian on this topic.
Birth rates usually fall substantially as female education levels rise and women become more empowered generally. I would be happier about the world if countries that currently have high birth rates see those birth rates fall thanks to better education levels etc. The sort of drastic fall in birth rates seen in, e.g., South Korea and Iran is caused by large society-wide changes, and I don’t think it’s likely that as an outside donor I can do anything to help bring about similar society-wide change in, e.g., Nigeria.
But improved access to contraceptives and family planning information help at least some couples choose to have fewer children, and that is something that I would plausibly donate towards. (I don’t know what sort of cost-per-unwanted-birth-averted figure I’d need to prefer a donation to, say, Marie Stopes over a donation to SCI, but it’s something I would carefully consider if I did see those figures.)
I can’t think of any realistic cases where I would pay for extra people to be born.
I was too lazy to specify that I was talking about the world as it is.
A couple might have a third (or first, or...) child, or they might not. I can accept that the two possibilities lead to slightly different total or average utilities, but as I said, I am not utilitarian on this point. I think we just allow people to choose how many children they have, and we build the rest of ethics around that.
I didn’t make an introduction comment in the last post, so I suppose I should do one here. I’m David Barry—one of the migrated posts from the old blog is authored by the user David_Barry, but I signed up my usual Internet handle before thinking about the account that had already been made for me. I live in Perth, where I moved for work earlier this year, having previously lived in Brisbane.
I always used to think I’d become a physicist one day, but what was supposed to be a PhD went badly for too long and I escaped with a Master’s. I’ve now been working in mining geostatistics for almost six years, and donating a chunk of my salary to GiveWell-recommended charities for five years.
I don’t do much actively in EA apart from the donations I send out roughly once a month. Occasionally I’ll knuckle down and work through cost-effectiveness calcu-guestimates, but mostly I just like skimming the EA Facebook group and this forum, occasionally chipping in.
I disagree with a bit of the intro and part one.
You can easily say that Effective Altruism answers a question. The question is, “What should I do with my life?” and the answer is, “As much good as possible (or at least a decent step in that direction).” Only if you take that answer as a starting premise can you then say that EA asks the question, “How do I do the most good?”
Conversely, you can just as easily say that feminism doesn’t ask whether men and women should be equal (that they should be is the starting premise), it asks how society is structurally unequal and how we might re-make society so that it becomes equal.
So I don’t see EA as necessarily in some different category than the (other) ideologies that you list.
In part one, I just… don’t really see a big issue with -ism versus -ist, at least not one anywhere near as large as you’re claiming exists. “Can I [x] and still be a member of the Effective Altruism movement?” seems about as natural a question to ask as “Can I [x] and still be an Effective Altruist?” As long as there’s an EA movement that’s in any way demanding of its followers, it provokes the same sort of questions regardless of whether we call ourselves followers of Effective Altruism or Effective Altruists. Insofar as there’s a problem, I think it’s the “impudence” that you mention of calling this movement Effective Altruism in the first place.
(If someone comes up with a better term for EA followers, I’ll be happy to adopt it—I don’t see it as a big issue. In the meantime I’ll occasionally call myself an “EA” if it makes sense to do so in context.)
Alternative descriptors include “aspiring effective altruist”, “interested in Effective Altruism”, “member of the Effective Altruism movement”… What do you think of those options?
“Aspiring effective altruist” doesn’t describe me: I don’t aspire to anything more than what I’m currently doing, which is donating a decent-sized fraction of my salary to charity. I plateaued in my journey towards an idealised EA several years ago.
“Interested in Effective Altruism” is far too weak.
“Member of the Effective Altruism movement” is something I’d be happy to call myself.
Pretty passively… Like I’ll send some money GiveWell’s way later this year to help find effective giving opportunities, but it doesn’t feel inside of me as though I’m aspiring to something here. The GiveWell staff might aspire to find those better giving opportunities; I merely help them a bit and hope that they succeed.
I also think that describing ourselves primarily as having a never-ending aspiration is selling us short if we’re actually achieving stuff.
Thanks for mentioning that you run EA Melbourne—I think this difference in perspective is what’s driving our -ism/-ist disagreement that I talk about in my earlier comment. I’ve never been to an EA meetup group (I moved away from Brisbane in February, missing out by about half a year on the new group that’s just starting there...), and I’d wondered what EA “looked like” in these contexts. If a lot of it is just meeting up every few weeks for a chat about EA-ish topics, then I agree that “effective altruist” is a dubious term if applied to everyone there.
Is it the core idea though? None of the introductions I linked to above mention anything about what one “should” do.
Perhaps a different phrasing would be a little better, but however it’s worded, moral beliefs and/or moral reasoning motivate most of what I see in the EA movement today—totally fundamental to everything, even if it’s not always explicitly stated. Certainly what keeps me sending out donations every month or so is the internal conviction that it’s the right thing to do.
Maybe this is another difference of perspective thing? Like if many of the EA people you see are more passive consumers of EA material, instead of structuring their lives/finances around it, then the fundamental moral motivation of introductions to EA seems absent? I don’t know.
Certainly I find the idea of this (persuading others to do good with their resources) being a core motivating philosophy of my life very off-putting.
I see the core motivating philosophy of my life as trying to do good with my resources. Some no doubt see persuading others as an important part of their resources (I mostly fail at it), but to me EA most fundamentally is about maximising one’s own impact, in whichever ways one can.
The point of that word being there is to reduce the strength of the claim: you’re focused on being effective, you’re trying hard to be effective, but to say that you are effective is different.
I don’t really want to reduce the strength of my claim, though[1]: if I have to be pedantic, I’ll talk about being effective in probabilistic expected-value terms. If donating to our best guesses of the most cost-effective charities we can find today doesn’t qualify as “effective”, then I don’t think there’s much use in the word, either to describe an -ism or an -ist. It’d be more accurate to call it “hopefully effective altruism”, but I don’t think it’s much of a sacrifice to drop the “hopefully”.
[1] At an emotional level, I have a bit of a I’ve donated a quarter of my salary to the best charities I could find for the last five years, stop trying to take my noun phrase away reaction as well.
I’m not a GWWC member, because I don’t want to lock myself in to a pledge. (I’ve been comfortably over 10% for a few years, and expect that to continue, but I could imagine, e.g., needing expensive medical care in the future and cutting out my donations to pay for that.) For that reason I wouldn’t take the pledge in either its current or its proposed form.
The healthcare thing was just an example (though, despite the FAQ on this topic that Owen brought up below, I would still feel dishonest withdrawing from a pledge for this reason). It’s the lock-in thing that I just don’t feel comfortable with.
I ramped up my donations after discovering GiveWell, and at the time it looked like it cost ~$500 to save a life. Now they reckon it’s roughly ten times that amount. The overwhelming moral case for donating today feels around ten times weaker to me than it did in 2009. If the cost per life saved(-equivalent) rises even further in the coming decade, I might decide that I’m only going to chip in a few percent of my income to MSF, say.
Basically I feel more comfortable donating and being an example of someone who donates to cost-effective charities, rather than publicly pledging.
About a quarter of my donations this year will go to AMF. I’d feel a bit weird holding on to the money instead of donating it.
It’s some kind of balancing act between supporting GiveWell-recommended charities as a way of supporting GiveWell, and recognising that our best guess is that bednets are substantially more cost-effective than deworming/cash transfers. (Pending the forthcoming update....)
Presumably they’ve already factored in the relative strength of bednets.
I don’t think this is relevant to GiveWell’s decision not to recommend AMF… Immunisations are super-cost-effective, but GiveWell don’t make a recommendation in this area because GAVI or UNICEF or whoever already have committed funding for this.
I’ve got two choices if I want to donate all my donation money this year:
- Donate to AMF, which is likely higher impact, but maybe my money won’t be spent for a couple of years.
- Donate somewhere else, likely lower impact.
I think an AMF donation looks a pretty decent option here. I would say that the EA-controversial part of my thinking is the insistence on donating all my donation money this year, rather than using a donor-advised fund (to which I say, “Eh, whatevs...”).
AMF is far more likely to need the money soon than GAVI.
My working assumption is that medical research is the most cost-effective domestic charity. My toy model is:
- Disease X kills N people per year.
- In expectation, we’ll need M researcher-years to find a cure, costing salary × M dollars.
- Currently disease X receives F dollars in funding each year, so in expectation we’ll find the cure in (salary × M)/F years.
- With an extra donation of D dollars, the expected cure date becomes (salary × M − D)/F years away, i.e., it’s brought forward by D/F years.
- The earlier cure means that N × D/F people who would have died will now live.
Orders of magnitude: 10^7 cancer deaths worldwide annually and 10^10 dollars in annual funding give ~10^3 dollars per future statistical life saved.
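The arithmetic of the toy model can be sketched in a few lines (the figures are just the order-of-magnitude numbers above, not a real cost-effectiveness estimate):

```python
# A sketch of the toy model: an extra donation of D dollars against annual
# funding of F dollars brings the expected cure date forward by D/F years,
# during which N*D/F of the N-per-year deaths are averted.

def years_brought_forward(donation, annual_funding):
    """Extra D dollars brings the expected cure date forward by D/F years."""
    return donation / annual_funding

def lives_saved(deaths_per_year, donation, annual_funding):
    """N * D / F: people who would have died before the cure but now won't."""
    return deaths_per_year * years_brought_forward(donation, annual_funding)

N = 1e7   # ~10^7 cancer deaths worldwide per year
F = 1e10  # ~10^10 dollars of annual funding
D = 1e3   # a 10^3-dollar donation

print(lives_saved(N, D, F))  # → 1.0, i.e. ~10^3 dollars per statistical life saved
```

Note that the salary and M terms cancel out of the marginal calculation: only N and F matter for the impact of an extra donation.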
There are enough problems with this toy model not to take it too seriously. I know that GiveWell have thought a lot about medical research, but it’s a complicated thing with commercial interests getting involved in some stages, the question of how important the marginal researcher is, different diseases might have a different number of research “leads”, and so on. The numbers will also change depending on the discount rate.
Still, given the orders of magnitude involved, I think medical research is pretty good impact-per-dollar-wise. Choosing a particular charity then comes down to looking at which diseases are most underfunded relative to their DALY burden, and which charity puts the money into research rather than “awareness raising” or whatever. And the latter issue is definitely one which most people can appreciate.