That is precisely the argument that I maintain is only a problem for people who want to write philosophy textbooks, and even then one that should only take a paragraph to tidy up. It is not an issue for altruists otherwise—everyone saves the drowning child.
The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists
This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably local region, like the Earth. No-one responds to the drowning child by saying, “well there might be an infinite number of sentient life-forms out there, so it doesn’t matter if the child drowns or I damage my suit”. It is just not a consideration.
So I disagree very strongly with the framing of your post, since the bit I quoted is in the summary. The rest of your post is on the somewhat more reasonable topic of comparing utilities across an infinite number of generations. I don’t really see the use of this (you don’t need a fully developed theory of infinite ethics to justify a carbon tax; considering a handful of generations will do), and don’t see the use of the post on this forum, but I’m open to suggestions of possible applications.
The envelope icon next to “Messages” in the top-right (just below the banner) becomes an open envelope when you have a reply. (I think it turns a brighter shade of blue as well? I can’t remember.) The icon returns to being a closed envelope after you click on it and presumably see what messages/replies you have.
My values align fairly closely with GiveWell’s. If they continue to ask for donations then probably about 20% of my giving next year will go to them (as in the past two years). Apart from that:
GiveWell’s preferred split across their recommended charities AMF/SCI/GD/EvAc (Evidence Action, which includes Deworm the World) is 67/13/13/7. Since most of the reasoning behind that split is how much money each charity could reasonably use, and I agree with GiveWell that bednets are really cost-effective, I won’t be deviating much from GiveWell’s recommendation.
Probably I will reduce GiveDirectly's share to 5% or so (with increases for SCI and EvAc). I haven't studied GiveWell's latest cost-effectiveness numbers closely, but their headline result puts GiveDirectly's effectiveness far below either deworming or bednets. So I'll continue donating a relatively small amount to GD in recognition of them being methodologically really great.
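For concreteness, here's a rough sketch of the adjusted split I have in mind; the reallocation of GiveDirectly's share is an arbitrary illustrative choice on my part, not anything GiveWell recommends:

```python
# Rough sketch of an adjusted donation split (illustrative only; moving
# GiveDirectly's freed share to SCI and Evidence Action is my own arbitrary choice).
givewell_split = {"AMF": 0.67, "SCI": 0.13, "GD": 0.13, "EvAc": 0.07}

my_split = dict(givewell_split)
freed = my_split["GD"] - 0.05      # take GiveDirectly down to 5%
my_split["GD"] = 0.05
my_split["SCI"] += freed / 2       # split the freed 8 points evenly between
my_split["EvAc"] += freed / 2      # SCI and Evidence Action

donation = 1000                    # e.g. a $1000 donation
for charity, share in my_split.items():
    print(f"{charity}: {share:.0%} -> ${share * donation:.0f}")
```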
I haven’t yet given much thought to GiveWell’s other ‘standout’ charities, and whether it’s correct to donate to them or not.
Relying on hoped-for compounding long-term benefits to make donation decisions is at least not a matter of complete consensus (I certainly don't rely on them).
My understanding of your position is:
Human welfare benefits compound, though we don’t know how much or for how long (and I am dubious, along with one of the commenters, about a compounding model for this).
Animal welfare benefits might compound if they’re caused by human value changes.
In the case of ACE's recommendations, we have three charities which aim to structurally change human society. So we have short-term benefits which appear much larger than those from human-targeted charities, together with possibly compounding but poorly researched long-term benefits, to be weighed against the possibly compounding but poorly researched long-term benefits of human-targeted charities.
I would describe the paragraph of JPB’s that you quote as highly relevant; at the very least it’s useful even if not sufficient information to make a donation decision based on expected impact.
(For the record, I’ve yet to donate to animal welfare charities because I am a horrible speciesist, but I think the animal welfare wing of EA deserves to be much more prominent than it currently is.)
My internal definition is “take a job (or build a business) so that you donate more than you otherwise would have” [1]. It’s too minimalist a definition to work in every case (it’d be unreasonable to call someone on a $1mn salary who donates $1000 “earning to give”, even if they wouldn’t donate anything on $500k), but if you’re the sort of person who considers “how much will I donate to charity” as an input into your choice of job, then I think the definition will work most of the time.
There probably needs to be a threshold amount donated for “earning to give” to be applied in an EA context, but I don’t see the need for a progressive percentage scale for higher-income earners. If you’re giving 10% of $1mn, then you’re doing a lot more than me and my higher percentage of a lot less.
[1] That needs a bit of pedantic re-writing for it to perfectly match what I mean. e.g., I consider myself earning to give because if it wasn’t for my pesky conscience, I’d negotiate a reduced salary for a four-day work week. It’d still be basically the same job, just a different contract… anyway I don’t think this sort of pedantry is important here.
It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas.
I’m happy to go with your former definition here (I’m dubious about putting the label ‘altruism’ onto something that’s profit-seeking, but “high-impact good things” are to be encouraged regardless). My objection is that I haven’t seen anyone make a case that these long-term ideas are cost-effective. e.g.,
My best guess is that these activities have a significantly larger medium term humanitarian impact than aid. I think this is a common view amongst intellectuals in the US. We probably all agree that it’s not a clear case either way.
Has anyone tried to make this case, discussing the marginal impact of an extra technology worker? We’d agree that as a whole, scientific and technological progress are enormously important, and underpin the poverty-alleviation work that we’re comparing these longer-term ideas to. But, e.g., if you go into tech and help create a gadget, and in an alternative world some sort of similar gadget gets released a little bit later, what is your impact?
The answer to that last question might be large in expectation-value terms (there’s a small probability of you making a profoundly different sort of transformative gadget), but I’d like to see someone try to plug some numbers in before it becomes the main entry point for Effective Altruism.
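To show the kind of back-of-the-envelope calculation I'd like to see, here is a toy sketch; every number in it is made up for illustration, not an estimate I endorse:

```python
# Toy expectation-value estimate for the marginal impact of a tech career.
# Every number is a placeholder; the point is the structure of the estimate,
# not the output.
p_transformative = 1e-4           # assumed chance of creating something genuinely novel
value_transformative = 1e9        # assumed dollar-equivalent value if you do

# Otherwise you mostly speed up a gadget that would have appeared anyway.
value_per_year_of_speedup = 1e7   # assumed
years_brought_forward = 0.1       # assumed: you bring it forward by ~a month

expected_impact = (p_transformative * value_transformative
                   + (1 - p_transformative) * value_per_year_of_speedup * years_brought_forward)
print(f"Toy expected impact over a career: ~${expected_impact:,.0f}")
```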
Note that e.g. spending money to influence elections is a pretty common activity, it seems weird to be so skeptical.
When Ben wrote “smarter leaders”, I interpreted it as some sort of qualitative change in the politicians we elect—a dream that would involve changing political party structures so that people good at playing internal power games aren’t rewarded, and instead we get a choice of more honest, clever, and dedicated candidates. If, on the other hand, “electing smarter leaders” means donating to your preferred party’s or candidate’s get-out-the-vote campaign… well, I would like to see the cost-effectiveness estimate.
(Ben might also be referring to EAs going into politics themselves, and… fair enough. I doubt it’ll apply to more than a small minority of EAs, but he only spent a small minority of his post writing about it.)
there are many other technocratic policies in the same boat, where you’d expect money to be helpful.
I think this is reasonable, and expectation-value impact estimates should be fairly tractable here, since policy wonks have often done cost-benefit analyses (leaving only the question of how much marginal donated dollars can shift the probability of a policy being enacted).
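The shape of such an estimate would be something like the sketch below; both inputs are placeholders, and the second one is where all the real difficulty lives:

```python
# Sketch of the expectation-value estimate for a marginal lobbying dollar.
# Both inputs are placeholders; estimating the second is the hard part.
policy_net_benefit = 1e9       # net benefit if the policy passes, from a cost-benefit analysis
delta_p_per_dollar = 1e-9      # assumed shift in probability of passage per marginal dollar

expected_benefit_per_dollar = policy_net_benefit * delta_p_per_dollar
print(f"Expected benefit per donated dollar: ${expected_benefit_per_dollar:.2f}")
```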
Overall I still feel like these ideas, as EA ideas, are in an embryonic stage since they lack cost-effectiveness guesstimates.
Moderate long-run EA doesn’t look close to having fully formed ideas to me, and therefore it seems to me a strange way to introduce people to EA more generally.
you’ll want to make investments in technology
I don’t understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?
and economic growth
Who knows how to make economies grow?
This will mean better global institutions, smarter leaders, more social science
What is a “better” global institution, and is there any EA writing on plans to make any such institutions better? (I don’t mean this to come across as entirely critical—I can imagine someone being a bureaucrat or diplomat at the next WTO round or something. I just haven’t seen any concrete ideas floated in this direction. Is there a corner of EA websites that I’m completely oblivious to? A Facebook thread that I missed (quite plausible)?)
I have even less idea of how you plan to make better politicians win elections.
More social science I can at least understand: more policy-relevant knowledge --> hopefully better policy-making.
Underlying some of what you write is, I think, the idea that political lobbying or activism (?) could be highly effective. Or maybe going into the public service to craft policy. And that might well be right, and it would perhaps put this wing of EA, should it develop, comfortably within the sort of common-sense ideas that you say it would. (I say “perhaps” because the most prominent policy idea I see in EA discussions—I might be biased because I agree with and read a lot of it—is open borders, which is decidedly not mainstream.)
But overall I just don’t see where this hypothetical introduction to EA is going to go, at least until the Open Philanthropy Project has a few years under its belt.
The bottom part of your diagram has lots of boxes in it. Further up, “poverty alleviation is most important” is one box. If there was as much detail in the latter as there is in the former, you could draw an arrow from “poverty alleviation” to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn’t exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way in amongst them, and “poverty alleviation is most important” would be a bottleneck.
Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).
I agree that there’s some real sense in which existential risk or far future concerns is more of a bottleneck than human poverty alleviation or animal welfare—there’s a bigger “cause-distance” between colonising Mars and working on AI than the “cause-distance” between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and “insight” overstates the difference.
I haven’t seen a downvote here that I’ve agreed with, and for the moment I’d prefer an only-upvote system. I don’t know where I’d draw the line on where downvoting is acceptable to me (or what guidelines I’d use); I just know I haven’t drawn that line yet.
Yes, I agree with that, and it’s worth someone making that point. But I think in general it is too common a theme in EA discussion to compare some possible altruistic endeavour (here kidney donation) to perfectly optimal behaviour, and then criticise the endeavour as being sub-optimal—Ryan even words it as “causing net harm”!
In reality we’re all sub-optimal, each in our own many ways. If pointing out that kidney donation is sub-optimal (assuming all the arguments really do hold!) nudges some possible kidney donors to actually donate more of their income, then great. But I still think that there are people who would consider donating a kidney but who wouldn’t donate an extra half-month’s salary instead.
How long would it take to create $2k of value? That’s generally 1-2 weeks of work. So if kidney donation makes you lose more than 1-2 weeks of life, and those weeks constitute funds that you would donate, or voluntary contributions that you would make, then it’s a net negative activity for an effective altruist.
This can’t be the right comparison to make if the 1-2 weeks of life is lost decades from now. The (foregone) altruistic opportunities in 2060 are likely to cost much more than $2000 per 15 DALYs averted.
I think the basic shape of your argument still holds, based on foregone income that you could donate today, but a slightly shorter retirement doesn’t look like it makes much difference to one’s total altruism (especially if you leave donations to charity in your will).
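A quick illustration of why the timing matters; the 2060 cost-per-DALY figure is an assumption I'm making up, while the $2000-per-15-DALYs figure is the one from the argument above:

```python
# Why timing matters: comparing 1-2 weeks of foregone altruism today vs in 2060.
# Today's figure ($2000 averting ~15 DALYs) is taken from the argument above;
# the 2060 cost per DALY is a pure assumption (cheap opportunities dry up).
value_of_two_weeks = 2000                        # dollars earned/donated in ~2 weeks

cost_per_daly_today = 2000 / 15                  # ~$133 per DALY averted
cost_per_daly_2060 = cost_per_daly_today * 10    # assumed: 10x more expensive by 2060

dalys_foregone_today = value_of_two_weeks / cost_per_daly_today  # ~15
dalys_foregone_2060 = value_of_two_weeks / cost_per_daly_2060    # ~1.5

print(f"DALYs foregone if the weeks are lost today:   {dalys_foregone_today:.1f}")
print(f"DALYs foregone if the weeks are lost in 2060: {dalys_foregone_2060:.1f}")
```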
That’s an unfair comparison.
But it might be a relevant comparison for many people. i.e., I expect that there are people who would be willing to forego some income to donate a kidney (and they may not need to do this, depending on the availability of paid medical leave), but who wouldn’t donate all of that income if they kept both kidneys.
I don’t understand what you’re pointing us to in that link. The main part of the text tells us that ties are usually broken in swing states by drawing lots (so if you did a full accounting of probabilities and expectation values, you’d include some factors of 1⁄2, which I think all wash out anyway), and that the probability of a tie in a swing state is around 1 in 10^5.
The second half of the post is Randall doing his usual entertaining thing of describing a ridiculously extreme event. (No-one who argues that a marginal vote is valuable for expectation-value reasons thinks that most of the benefit comes from the possibility of ties in nine states.)
Perhaps some of those details are interesting, but it doesn’t look to me like it changes anything of what’s been debated in this thread.
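For what it's worth, the standard expectation-value argument has roughly this shape; both numbers below are assumptions for illustration, not estimates I'm defending:

```python
# Sketch of the standard expectation-value argument for a marginal vote.
# Both numbers are assumptions; a careful version would combine the per-state
# tie probabilities from the link with the chance that the state is pivotal.
p_decisive = 1e-7          # assumed probability one vote changes the national outcome
value_difference = 1e11    # assumed dollar-equivalent difference between the two outcomes

expected_value_of_vote = p_decisive * value_difference
print(f"Toy expected value of one vote: ~${expected_value_of_vote:,.0f}")
```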
My main response is that this is worrying about very little—it doesn’t take much time to choose who to vote for once or twice every few years.
But in particular,
2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).
is an overstated concern at least for the US (relative risk around 1.2 of dying on the road on election day compared to non-election days) and Australia (relative risk around 1.03 +/- error analysis I haven’t done).
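The back-of-the-envelope calculation behind the US figure goes roughly like this; the road-death and population numbers are round figures from memory, so treat them as assumptions:

```python
# Back-of-the-envelope absolute risk for election-day driving in the US.
# Road-death and population figures are round numbers and should be treated
# as assumptions.
road_deaths_per_year = 35_000        # approximate US road fatalities per year
population = 320_000_000
relative_risk_election_day = 1.2     # the figure quoted above

baseline_daily_risk = road_deaths_per_year / 365 / population
excess_risk = baseline_daily_risk * (relative_risk_election_day - 1)

print(f"Baseline daily risk of dying on the road: ~1 in {1 / baseline_daily_risk:,.0f}")
print(f"Extra risk attributable to election day:  ~1 in {1 / excess_risk:,.0f}")
```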
That’s OK, even if I had perceived it as an attack, I’ve thought enough about this topic for it not to bother me!
As I said to Peter in our long thread, “Eh whatevs”. :P
I don’t think I can make anything more than a very weak defence of avoiding DAFs in this situation (the defence would go: “They seem kinda weird from a signalling perspective”). I’m terrible at finance stuff, and a DAF seems like a finance-y thing, and so I avoid them.
Probability that they’ll need my money soon:
GAVI: ~0%
AMF: ~50%
SCI: ~100%
You might say “well there’s a 50-percentage-point difference at each of those two steps” and think I’m being inconsistent in donating to AMF and not GAVI. But if I try some expectation-value-type calculation, I’ll be multiplying the impact of AMF’s work by 50% and getting something comparable to SCI, but getting something close to zero for GAVI.
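In sketch form, the comparison I'm making looks like this; the per-dollar impact figures are placeholders to show the structure, not estimates:

```python
# Sketch of the probability-weighted comparison above. The "impact per dollar"
# figures are placeholders; only the probabilities come from my guesses above.
charities = {
    #        (prob. they need my money soon, assumed impact per dollar if they do)
    "GAVI": (0.0, 10.0),
    "AMF":  (0.5, 10.0),
    "SCI":  (1.0,  5.0),
}

for name, (p_needs_money, impact_per_dollar) in charities.items():
    expected_impact = p_needs_money * impact_per_dollar
    print(f"{name}: expected impact per marginal dollar ~ {expected_impact:.1f}")
```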
Letting the child drown in the hope that
a) there’s an infinite number of life-forms outside our observable universe, and
b) that the correct moral theory does not simply require counting utilities (or whatever) in some local region
strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.