The Philanthropist’s Paradox
TL;DR. Many effective altruists wonder whether it’s better to give now or invest and give later. I’ve realised there is an additional worry for those who (like me) are sceptical of the value of the far future. Roughly, it looks like such people are rationally committed to investing their money and spending it in the future (potentially, at the end of time) even though they don’t think this will do any good and they can see this whole problem coming. I don’t think I’ve seen this mentioned anywhere else, so I thought I’d bring it to light. I don’t have a resolution, which is why I call it a paradox.
Setting a familiar scene: should you give now or invest and give later?
You’re thinking about giving money to charity because you want to do some good. Then someone points out to you that, if you invested your money, it would grow over time and therefore you’d be able to do more good overall. You don’t believe in pure time discounting—i.e. you don’t think 1 unit of happiness is morally worth more today than it is tomorrow—so you invest.
As you think about this more, you realise it’s always going to be better to keep growing the money instead of spending it now. You set up a trust that runs after your death and tell the executors of the trust to keep investing the money until it will do as much good as possible. But when does the money get spent? It seems the money keeps on growing and never gets given away, so your investment ends up doing no good at all. Hence we have the philanthropist’s paradox.
How to resolve the philanthropist’s paradox?
There are lots of practical considerations that might push you one way or the other: if you don’t give now, you’ll never actually make yourself give later; there are better opportunities to give now; you’ll know more later, so it’s better to wait; the Earth might get destroyed, so you should give sooner; and so on. I won’t discuss these as I’m interested in the pure version of the paradox that leads to the conclusion you should give later.[1]
What’s the solution if we ignore the practical concerns? One option is to note that, at some stage, you (or, rather, your executors) will have enough money to solve all the world’s problems. At that point, you should spend it, as there’s no value in growing your investment further. This won’t work if the financial costs of solving the world’s problems keep growing and grow faster than your investment increases. However, if one supposes the universe will eventually end – all the stars will burn out at some point – then you will eventually reach a stage where it’s better to spend the money. If you wait any longer there won’t be any people left. This might not be a very satisfactory response, but then it is called a ‘paradox’ for a reason.
A new twist for those who aren’t convinced about the value of the far future
The above problem implicitly assumed something like totalism, the view on which the best history of the universe is the one with the greatest total of happiness. If you’re a totalist, you will care about helping those who will potentially exist in millions of years.
However, totalism is not the only view you could take about the value of future people. We might agree with Jan Narveson who stated “we are in favour of making people happy, but neutral about making happy people”[2]. Views of this sort are typically called ‘person-affecting’ (PA) views.
There isn’t a single person-affecting view, but a family of them. I’ll quickly introduce them before explaining the new version of the paradox they face. The three most common person-affecting theories are:
Presentism: the only people who matter are those who presently exist (rather than those who might or will exist in the future)
Actualism: the only people who matter are those who actually, rather than merely possibly, exist (this means future actual people do count)
Necessitarianism: the only people who matter, when deciding between a set of outcomes, are those people who exist in all the outcomes under consideration. This is meant to exclude those whose existence is contingent on the outcome of the current decision.
Each of these views captures the intuition that creating some new person is not good: that person does not presently, actually, or necessarily exist. I won’t try to explain why you might like these views here (but see this footnote if you’re interested).[3]
I should note you could also think the far future doesn’t matter (as much) because you believe in pure time discounting (e.g. 1 unit of happiness next year is morally worth 98% of one unit of happiness this year). Whether you give now or later, if you endorse pure time discounting, just depends on whether the percentage annual increase in your money is higher or lower than the percentage annual decrease in the moral value of the future. I don’t think pure time discounting is particularly plausible, but discussing it is outside the scope of this essay.[4]
The (Person-Affecting) Philanthropist’s Paradox, a tale of foreseeable regret
I’ll come back to other person-affecting views later, but, for now, suppose you’re a presentist philanthropist, which means you just care about benefitting currently existing people, and you found the ‘give later’ argument convincing. What should you do?
Option 1: You could give away your money now in 2017. Say that will bring about 100 units of happiness.
Option 2: You could invest it. Following the logic of the paradox, you put the money in trust and it doubles every 50 years or so. Now, after 200 years, in 2217, your investment can do 16 times more good.
We can feel a problem coming. The presentist judges outcomes by how they affect presently existing people. Assuming that no one alive at 2017 is also alive in 200 years, at 2217, nothing that happens at 2217 can count as good or bad from the perspective of a decision made at 2017. So, although we might have thought waiting until 2217 and giving later would do more good, it turns out the presentist should think it does no good at all.
Realising the trap awaiting him, what should the presentist do? What he could do is invest the money for just 50 years before giving it away (assume he’s a young philanthropist). This allows him to double his donation. Let’s assume he can use the money at 2067 to benefit only people who were presently alive at 2017. This is clearly superior to giving now, at 2017, as he has less money now than he would at 2067. Remember, presentism doesn’t entail pure time discounting: a presentist can be neutral about giving someone a painkiller now versus giving a painkiller to that same person in 50 years’ time. Why? That person presently existed at the time the decision was taken. Hence providing twice as many benefits at 2067 rather than 2017, given they are to the same people, is twice as good.
Yet now we find a new oddness. Suppose those 50 years have passed and the presentist is now about to dole out his investment. The presentist pauses for a moment and thinks “how can I most effectively benefit presently existing people?” He’s at 2067 and there is a whole load of new presently-existing people. They didn’t exist at 2017, it’s true, but the presentist is presently at 2067 and is making a decision on that basis. Now the presentist finds himself facing exactly the same choice at 2067 that he faced at 2017: whether to give now or give later.
All the same logic applies so he decides, once again, that he should give later. Knowing he won’t live forever, he puts the money in a trust and instructs the executors to “most effectively benefit those who will presently exist at 2117”. But this situation will recur. Every time he (or rather, his executors) consider whether to give now or give later, it will always do more good, on presentism, to invest with a view to giving later. This leads him through a series of decisions that means the money ends up being donated in the far future (at the death of the universe), at which point none of the people who presently existed at 2017 will be alive. Thus, the donation ends up being valueless, whereas if he’d just donated immediately, in 2017, he would have done at least some good.
It’s worth noting the difference between the presentist case and the earlier, totalist one. It might seem strange that the totalist should wait so long until his money gets spent, but at least this counted as doing a lot of good on totalism. The presentist runs through the same rational process as the totalist and also gives away his money at the end of time, but this counts as doing no good on presentism at all. Further, the presentist could foresee he would choose to later do things he currently (at 2017) considers will have no value. Hence the presentist has an extra problem in this paradox.
What should the presentist do?
One thing the presentist might do is to pre-commit himself to spending his money at some later time. Here, he faces a trade-off. He knows, if he invests the money, it will grow at X% a year. He also knows that the people who presently exist will all eventually die. Say 1% of the Earth’s population who are alive at 2017 will die each year (assume this doesn’t include him; he’s immortal, or something, for our purposes). Hence at 2067 half of them are alive. At 2117 they’ve all died and, from the perspective of 2017, nothing that happens can now be good or bad. Let’s assume he works this out and realises the most good he can do is by legally binding himself at 2017 to spend the money he’ll have at 2067.
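This trade-off can be sketched numerically. Here is a minimal model using only the illustrative figures already in the text (money doubling every 50 years, 1% of the 2017 population dying each year); the numbers are purely for illustration:

```python
# Presentist's trade-off (illustrative numbers only): the donation pot
# doubles every 50 years, while 1% of the people alive at 2017 die each
# year, so by 2117 no presently-existing (as of 2017) people remain.

def good_done(years_waited):
    money = 100 * 2 ** (years_waited / 50)      # donation grows over time
    alive = max(0.0, 1 - 0.01 * years_waited)   # fraction of 2017 people left
    return money * alive                        # value on presentism

# Search for the waiting time that maximises presentist good.
best = max(range(101), key=good_done)
print(best, round(good_done(best), 1))
```

Under these made-up numbers the optimum is a few decades of waiting rather than the end of time: eventually the shrinking pool of 2017 people outweighs the compounding returns.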
This seems to solve the problem, but there is something weird about it. When 2067 rolls around, there’ll be new people who will presently exist. He’ll be annoyed at his past self for tying his hands, because what he, at 2067, wants to do is invest the money for just a bit longer to help them. I don’t have a better solution than this, but I would welcome someone suggesting one.
We could consider this a reductio ad absurdum against presentism, a fatal problem with presentism that causes us to abandon it. I’m not sure it is – a point I’ll come back to at the end – but it does seem paradoxical.[5] If it is a reductio, it isn’t uniquely a problem for presentism either: necessitarianism will face a similar kind of problem. I won’t discuss actualism because, for reasons also not worth getting into here, actualism isn’t action-guiding.[6]
Why necessitarian philanthropists get the same problem
Necessitarians think the only people that matter are those who exist in all outcomes under consideration, hence we exclude the people whose existence is contingent on what we do.
As Parfit and others have noted, it looks like nearly any decision will eventually change the identity of all future people.[7] Suppose the necessitarian philanthropist decides not to spend his money now, but to invest it instead. This causes the people who would have benefitted from his money, had he spent it now, to make slightly different decisions. Even tiny decisions will ripple through society, causing different people to meet and conceive children at very slightly different times. As your DNA is a necessary condition for your identity, this changes the identities of future people, who become contingent people.
This means the necessitarian has a similar difficulty in valuing the far future as the presentist, albeit for different reasons. To a presentist there’s no point benefitting those who will live in 10,000 years because such people do not presently exist. To a necessitarian there’s no way you could benefit people in 10,000 years’ time, no matter how hard you try, because whatever you do will change who those people are (hence making them the non-necessary people who don’t matter on the theory).
To illustrate, suppose you want to help the X people, some group of future humans who you know will get squashed by an asteroid that will hit the Earth in 10,000 years’ time. You decide to act – you raise awareness, build a big asteroid-zapping laser, etc. – but your actions affect who gets born, meaning the Y people are created instead of the original X people. On necessitarianism it’s not good for the X people if they are replaced with the Y people, nor is it good to create the Y people either (it’s never good for anyone to be created).
Hence, given that all actions eventually change all future identities, necessitarians should accept there’s a practical limit to how far in the future they can do good.[8] There might be uncertainty about what this limit is.[9] However, just as the presentist should worry about acting sooner rather than later because the number of presently existing people will dwindle the further from 2017 his money gets used, so the necessitarian will find himself with an effective discount rate (even though he doesn’t engage in pure time discounting): his act to give now or give later causes different people to be born. Hence if he invests for 50 years and then gives that money to a child who, at 2067, is currently aged 10, that child’s existence is presumably contingent on his investing the money. As the necessitarian discounts contingent people, he cannot claim investing and then using that money to benefit a contingently existing child is good. This is analogous to the presentist in 2017 realising there’s no point saving money to give to a 10-year-old in 2067 because that 10-year-old does not, in 2017, presently exist.
What can the necessitarian do to avoid the paradox?
I can think of one more move the necessitarian could make. He could argue his investing the money doesn’t change any identities of future people, so it really is better, on his theory, to invest it for many years.
This is less helpful than it first appears. If investing makes no difference to who gets born, presumably the necessitarian is now back in the same boat as the totalist: both agree it’s best to keep growing the cash until the end of time. The problem for the necessitarian is that one of the things his view seems to commit him to is believing we can’t help people in the far future, because anything we do will alter all the identities. He’s in a bind: he can’t simultaneously believe far future people don’t matter and that his investment does the most good if it’s spent in the far future.
All this is to say person-affecting views face an additional twist to the philanthropist’s paradox. These aren’t to be waved away as theoretical fancies: there are real-world philanthropists who appear to have person-affecting views: they don’t care about the far future and they think it’s better to make people happy rather than make happy people. If they want to do the most good with their money, this is a paradox they should find a principled response to when they consider whether to give now or give later.
Epilogue: a new reason not to be a person-affecting philanthropist?
Should we give up on person-affecting views because of this paradox? Maybe, but I doubt it. Two thoughts. First, it’s a well-established fact that all views in population ethics have weird outcomes. The bar for ‘plausible theories’ is accepted to be pretty low.[10] I can imagine an advocate of presentism or necessitarianism acknowledging this as just another bullet to bite, while still holding that his theory is, all things considered, the least-worst one.
Second, it’s not clear to me exactly where the problem lies. I’m unsure whether this should be understood as a problem for rationality (making good decisions), axiology (what counts as ‘good’), or maybe the way they are linked.[11] Perhaps what person-affecting theories need is an account of why you should (or shouldn’t) be able to foreseeably regret your future decisions.
[1] See this summary for a note on the practical concerns: http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/
[2] Jan Narveson, “Moral Problems of Population,” The Monist, 1973, 62–86.
[3] A key motivation for PA is the person-affecting restriction (PAR): one state of affairs can only be better than another if it is better for someone. This is typically combined with existence non-comparativism: existence is neither better nor worse for someone than non-existence. The argument for existence non-comparativism most famously comes from John Broome, who puts it:
...[I]t cannot ever be true that it is better for a person that she lives than that she should never have lived at all. If it were better for a person that she lives than that she should never have lived at all, then if she had never lived at all, that would have been worse for her than if she had lived. But if she had never lived at all, there would have been no her for it to be worse for, so it could not have been worse for her.
I won’t motivate them or critique them further. My objective here is just to indicate a problem for them.
[4] As Greaves puts the argument against pure time discounting: “But of course (runs the thought) the value of utility is independent of such locational factors: there is no possible justification for holding that the value of (say) curing someone’s headache, holding fixed her psychology, circumstances and deservingness, depends upon which year it is.” From Greaves, H., ‘Discounting and public policy: A survey’, conditionally accepted at Economics and Philosophy (link: http://users.ox.ac.uk/~mert2255/papers/discounting.pdf)
[5] By ‘paradoxical’ I mean that seemingly acceptable premises and seemingly acceptable reasoning lead to seemingly unacceptable conclusions.
[6] How good an outcome is depends on which outcome you choose to bring about, so you can’t know what you should do until you already know what you’re going to do. Actualists might respond this is the best we can do.
[7] See Reasons and Persons (1984).
[8] As an example, if a necessitarian put nuclear missiles on the Moon set to explode in 5,000 years’ time, and an alien spaceship happens to appear in 5,000 years, the necessitarian will admit he’s (unwittingly) made things worse. A presentist will (oddly) claim this is not bad, on the assumption the Moon-visitors haven’t yet been born. However, if the Moon-visitors were presently alive when the missiles were put on the Moon, the presentist would say the outcome is bad.
[9] For instance, we might think it takes some months or years before my choosing to buy tea rather than coffee at the supermarket changes the identities of all future people. If you find it implausible that actions could change who gets born, ask yourself whether you think you would have been born if World War One had never occurred.
[10] For a list of some problems see Hilary Greaves, “Population Axiology,” Philosophy Compass, 2017.
[11] In recent discussion, Patrick Kaczmarek informed me I’m absolutely mistaken to think this can be a problem with decision theory, and helpfully suggested the issue might be the bridging principle between one’s axiology and one’s decision theory.
I simply don’t believe that anyone is really (when it comes down to it) a presentist or a necessitarian.
I don’t think anyone is willing to actually endorse making choices which eliminate the headache of an existing person at the cost of bringing an infant into the world who will be tortured extensively for all time (but no one currently existing will see it and be made sad).
More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitarianism to be true, there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Theseus’s-ship-style worries is to shrug and say there isn’t any fact of the matter, but the presentist can’t take that line because, for them, there are huge moral implications to where we draw the line.
Moreover, both these views have serious puzzles about when an individual exists. Is it when they actually generate qualia? (If not, you risk saying that the fact they will exist in the future means they exist now.) How do we even know when that happens?
We can make necessitarianism asymmetric: only people who will necessarily exist OR would have negative utility (or less than the average/median utility, etc.) count.
Some prioritarian views, which also introduce some kind of asymmetry between good and bad, might also work.
I’m probably a necessitarian, and many (most?) people implicitly hold person-affecting views. However, that’s beside the point. I’m neither defending nor evaluating person-affecting views, or indeed any positions in population axiology. As I mentioned, and as is widely accepted by philosophers, all the views in population ethics have weird outcomes.
FWIW, and this is unrelated to anything said above, nothing about person-affecting views need rely on personal identity. The entity of concern can just be something that is able to feel happiness or unhappiness. This is typically the same line total utilitarians take. What person-affectors and totalists disagree about is whether (for one reason or another) creating new entities is good.
In fact, all the problems you’ve raised for person-affecting views also arise for totalists. To see this, let’s imagine a scenario where a mad scientist is creating a brain inside a body, where the body is being shocked with electricity. Suppose he grows it to a certain size, takes bits out, shrinks it, grows it again, etc. Now the totalist needs to take a stance on how much harm the scientist is doing and draw a line somewhere. The totalist and the person-affector can draw the line in the same place, wherever that is.
Whatever puzzles qualia poses for person-affecting views also apply to totalism (at least, the part of morality concerned with subjective experience).
I’m not sure I understand why this is a paradox.
Ignoring practical concerns, there are basically two effects:
1) If you wait, then you can compound your investment by x%.
2) If you wait, then the world gets wealthier, so the social return per dollar given to charity goes down by y%. (No pure time discounting required.)
If x% > y%, then it’s better to wait; and vice versa.
As a first approximation, both x and y are roughly equivalent to real economic growth, so waiting and giving now are about equally good.
To a second order, it can go either way depending on the situation.
It only seems counterintuitive if you can argue that x > y for most of history, so you should never give your money, but I don’t see why you would think that.
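The two opposing effects can be sketched with a toy calculation (the rates here are arbitrary illustrations, not claims about real markets):

```python
# Stylised give-now-vs-later model: waiting t years multiplies your
# money by (1+x)**t, while the social return per dollar falls by a
# factor of (1+y)**t as the world gets richer.

def relative_value_of_waiting(t, x, y):
    return ((1 + x) / (1 + y)) ** t

print(relative_value_of_waiting(50, 0.05, 0.03) > 1)   # x > y: wait
print(relative_value_of_waiting(50, 0.03, 0.05) < 1)   # x < y: give now
print(relative_value_of_waiting(50, 0.03, 0.03) == 1)  # x == y: indifferent
```

The whole question reduces to whichever rate is larger; no pure time discounting appears anywhere in the model.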
And in reality, practical concerns would come in long before this. In practice, you couldn’t ensure money spent long after your death would do much good, so you should very likely give it within your lifetime, or soon after.
More detail on these models: http://globalprioritiesproject.org/2015/02/give-now-or-later/
Hello Mr T.
The paradox only comes about if you think it’s generally true that it’s better to invest and give later rather than now. This might not be true for various practical reasons (i.e. the ones you gave), but I wanted to ignore those for the sake of argument, so I could present a new problem. If you think later is generally better than now, and you’re a totalist, it seems like you should try to grow that money until the end of time. This seems somewhat odd: you were planning to invest and do a bit more good tomorrow; now you’re investing for 100,000 years.
If you grant the structure of the paradox for the totalist, person-affecting views have an additional problem.
Imagine a universe that lasts forever, with zero uncertainty, constant equally good opportunities to turn wealth into utility, and constant high investment returns (say, 20% per time period).
In this scenario you could (mathematically) save your wealth for an infinite number of periods and then donate it, generating infinite utility.
It sounds paradoxical, but infinities generally are, and the paradox only exists if you think there’s a sufficient chance, relative to the interest rate, that the next period will exist and have opportunities to turn wealth into utility—that is, if you ‘expect’ an infinitely long-lasting universe.
A less counterintuitive approach with the same result would be to save everything with that 20% return and also donate some amount that’s less than 20% of the principal each period. This way each period the principal continues to grow, while each year you give away some amount between 0-20% (non-inclusive) and generate a finite amount of utility. After an infinite number of time periods you have accumulated an infinite principal and also generated infinite utility—just as high an expected value as the ‘save it all for an infinite number of time periods and then donate it’ approach suggested above!
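A minimal simulation of this mixed strategy, using the 20% return from the comment and an arbitrary 10% donation rate (any rate strictly between 0% and 20% behaves the same way):

```python
# Mixed strategy: invest at 20% per period and donate 10% of the
# principal each period. The principal still grows ~8% per period
# (1.20 * 0.90 = 1.08), so cumulative utility grows without bound too.

principal, total_utility = 100.0, 0.0
for _ in range(200):
    principal *= 1.20              # investment return
    donation = 0.10 * principal    # give away part of the gains
    principal -= donation
    total_utility += donation      # constant utility per dollar, by assumption

print(principal > 100.0, total_utility > 0.0)  # both keep growing
```

After any finite number of periods both the principal and the good done so far are strictly positive and still increasing, which is what makes the strategy less counterintuitive than donating only at ‘the end’.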
Infinities are weird. :)
I think you and Ben have picked up the part of the problem I wasn’t focusing on. I’m less concerned about the totalist version: I think you can spin a version of the story where you should donate at the end of time, and that’s just the best thing you can do.
My point was that, given you accept the totalist philanthropist’s paradox, there’s an additional weirdness for person-affecting views. That’s the bit I found interesting.
Although, I suppose there’s a reframing of this that makes the puzzle more curious. Totalists get a paradox where they recognise they should donate at the end of time, and that feels odd. Advocates of person-affecting views might think they dodge this problem by denying the value of the far future, but they get another kind of paradox instead.
Yeah not saying anything in contradiction to you, just adding my own two cents on the thing.
How is there anything (i.e. “and then”) after an infinite amount of periods (taking altogether an infinite amount of time)? Are you introducing hyperreals or nonstandard analysis? Are you claiming this is just a possibility (from our ignorance about the nature of time) or a fact, conditional on the universe lasting forever?
I think it’s extremely unlikely that time works this way, but if you’re an EU maximizer and assign some positive probability to this possibility, then, sure, you can get an infinite return in EU. Most likely you’ll get nothing. It’s a lot like Pascal’s wager.
I’m almost certain time doesn’t work this way in our universe! But for the paradox to exist we have to imagine a universe where an infinite amount of time really can pass. I’m not an expert in these expected value paradoxes for different kinds of infinity—might be worth asking Amanda Askell who is.
Either way, the mixed strategy of saving and donating some gives us a way out.
It’s worth pointing out that if time just advances forever, so that your current time is just “T seconds after the starting point”, then it is simultaneously true that:
time is infinite
every instant has a finite past (and an infinite future)
The second point in particular means that even though time is infinite, you still can’t wait an infinite amount of time and then do something. I think that’s what MichaelStJules was getting at.
Your mixed strategy has its own paradox, though – suppose you decide that one strategy is better than another if it “eventually” does more total good – that is, there’s a point in time after which “total amount of good done so far” exceeds that of the other strategy for the rest of eternity. You have to do something like this because it doesn’t usually make sense to ask which strategy achieved the most good “after infinite time” because infinite time never elapses.
Anyway, suppose you have that metric of “eventual winner”. Then your strategy can always be improved by reducing the fraction you donate, because the exponential growth of the investment will eventually outpace the linear reduction in donations. But as soon as you reduce the fraction to zero, you no longer get any gains at all. So you have the odd situation where no fraction is optimal – for any strategy, there is always a better one.
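This ‘no optimal fraction’ point can be checked with a quick sketch (the 20% return and the particular fractions are arbitrary illustrations):

```python
# Cumulative good from donating a fixed fraction of a growing principal
# each period. A smaller fraction starts behind, but its principal
# compounds faster, so it always overtakes eventually.

def cumulative_good(fraction, periods, r=0.20):
    principal, total = 100.0, 0.0
    for _ in range(periods):
        principal *= 1 + r
        donation = fraction * principal
        principal -= donation
        total += donation
    return total

print(cumulative_good(0.05, 10) < cumulative_good(0.10, 10))    # behind early
print(cumulative_good(0.05, 200) > cumulative_good(0.10, 200))  # ahead later
```

Since any positive fraction is eventually beaten by a smaller positive fraction, but a zero fraction does no good at all, no strategy in this family is optimal under the ‘eventual winner’ metric.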
In a context of infinite possible outcomes and infinite possible choice pathways, this actually isn’t that surprising. You might as well be surprised that there’s no largest number. And perhaps that applies just as well to the original philanthropist’s paradox – if you permit yourself an infinite time horizon to invest over, it’s just not surprising that there’s no optimal moment to “cash in”.
As soon as you start actually encoding your beliefs that the time horizon is in fact not infinite, I’m willing to bet you start getting some concrete moments to start paying your fund out, and some reasonable justifications for why those moments were better than any other. To the extent that the conclusion “you should wait until near the end of civilization to donate” is still a counterintuitive one, I claim it’s just because of our (correct) intuition that investing is not always better than donating right now, even in the long run. That’s the argument that Ben Todd and Sanjay made.
If I understood your post correctly, this resolves the paradox:
if you invest the money, you get a return (say of r1%)
if you donate, this is also an investment, which may get a return of (say) r2%
So the give now / give later problem is more or less about estimating which is better out of r1 and r2.
I think of donating as also being an investment because money donated now may (or may not) have an immediate effect, but there should also be knock-on positive impacts trickling on into the future. I.e.
an investment is make-payment-now-and-get-a-series-of-(uncertain)-future-cash-flows
a philanthropic “investment” is make-payment-now-and-get-a-series-of-(uncertain)-future-hedon-flows
If this doesn’t resolve the paradox, it may be that I have misunderstood the post.
Have just looked through the comments, and I think Ben Todd’s post may be expressing a similar idea to mine
The problem seems essentially the same as Parfit’s Hitchhiker: you must pre-commit to win, but you know that when the time comes to pay/spend, you’ll want to change your mind.
It’s my understanding that in company valuation the discount factor generally reflects opportunity cost rather than a difference in value across time. E.g., you might discount at 2% because that’s what you’d get in the bank, so the valuation is discounted against a safe investment’s opportunity cost. Don’t know if that’s new info at all.
For charities I suppose you wouldn’t use interest, but rather how much more valuable the money is to them today vs next year?
Particularly for higher impact charities, I should think their funding momentum is far more valuable than any return their donors could make in EV through investing.