There are many arguments one can make for spending more or less quickly, and that’s fine, but since this post doesn’t respond to my own argument in any sense, I’ll just flag that you can find it here, if anyone’s still interested!
The core of the argument is in Section 2. The key assumption it relies on is that our beneficiaries have a positive rate of pure time preference and/or imperfect intergenerational altruism. So the argument is essentially a reply to the “rational preference” argument presented here: I’d say we should do what’s best for people and their descendants, which is to be more patient than they prefer. If it’s true that it’s cheaper to save a life in some country today than in 100 years, in present value terms, then that is an instance of the inefficiency discussed in Section 2.6.
The argument is entirely compatible with
there being a significant risk of expropriation each year,
r being less than g sometimes, and
it being better to give now than to wait 100 years in particular. (The argument only implies that, given a positive rate of pure time preference and/or imperfect intergenerational altruism, there is probably some future time at which it is better to give than now, at least until a large share of total funding for the beneficiaries is being allocated patiently. A toy numerical illustration of the underlying wedge is sketched below.)
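To make that wedge concrete, here is a minimal toy calculation of my own. The rates below are made-up assumptions purely for illustration, not figures from the linked doc or its Section 2 model; the point is only that, with a positive rate of pure time preference, the beneficiaries' own discounting can favour giving now even while the undiscounted comparison favours waiting.

```python
import math

# Toy illustration only: these rates are invented for this example,
# not estimates from the linked doc.
r = 0.05      # assumed annual return on invested philanthropic funds
g = 0.02      # assumed annual growth in the cost of producing one unit of benefit
delta = 0.04  # assumed beneficiaries' rate of pure time preference
T = 100       # years of waiting, compared against giving now

give_now = 1.0                        # benefit units bought by $1 spent today
give_later = math.exp((r - g) * T)    # $1 invested for T years, then spent at the higher unit cost

# A patient philanthropist weighs benefit units equally across time:
patient_ratio = give_later / give_now                           # > 1 whenever r > g
# The beneficiaries apply their own pure time preference to the delayed benefit:
beneficiary_ratio = give_later * math.exp(-delta * T) / give_now

print(f"waiting looks {patient_ratio:.1f}x as good with no pure time preference")
print(f"waiting looks {beneficiary_ratio:.1f}x as good under the beneficiaries' own discounting")
```

With these invented rates, waiting looks about 20x better in undiscounted terms but worse than giving now by the beneficiaries' own discounting, which is roughly the wedge between their preferences and what's best for them and their descendants that the argument turns on.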
I’m only giving this topic a very cursory treatment, so I apologize for that.
I wrote this post quickly without much effort or research, and it’s just intended as a casual forum post, not anything approaching the level of an academic paper.
I’m not sure whether you’re content to make a narrow, technical, abstract point — that’s fine if so, but not what I intended to discuss here — or whether you’re trying to make a full argument that patient philanthropy is something we should actually do in practice. The latter sort of argument (which is what I wanted to address in this post) opens up a lot of considerations that the former does not.
There are many things that can’t be meaningfully modelled with real data, such as:
What’s the probability that patient philanthropy will be outlawed even in countries like England if patient philanthropic foundations try to use it to accumulate as much wealth and power as simple extrapolation implies? (My guess: ~100%.)
What’s the probability that patient philanthropy, if it’s not outlawed, would eventually contribute significantly to repugnant, evil outcomes like illiberalism, authoritarianism, plutocracy, oligarchy, and so on? (My guess: ~100%. So, patient philanthropy should be considered a catastrophic risk in any country where it is adopted.)
What’s the risk that patient philanthropic foundations based in Western, developed countries like England, holding money on behalf of recipients in developing countries such as those in sub-Saharan Africa, would do a worse job than if those same foundations (or some equivalent or substitute institution or intervention) were based in the recipient countries, with majority control by people from those countries? (My guess: the risk is high enough that it’s preferable to move the money from the donor countries to the recipient countries from the outset.)
How much do we value things like freedom, autonomy, equality, empowerment, democracy, non-paternalism, and so on? How much do we value them on consequentialist grounds? Do we value them at all on non-consequentialist grounds? How does the importance of these considerations compare to that of impact measures such as the cost per life saved or per QALY or DALY? (My opinion: even on consequentialist grounds alone, there are incredibly strong reasons to value these things, such that narrow cost-effectiveness calculations in the GiveWell style can’t hope to capture the full picture of what’s important.)
Under what assumptions about the future does the case for patient philanthropy break down? E.g., what do you have to assume about AGI or transformative AI? What do you have to assume about economic development in poor countries? Etc. (And how should we handle the uncertainty around this?)
What difference do philosophical assumptions make, such as a more deterministic view of history versus a view that places much greater emphasis on the agency, responsibility, and power of individuals and organizations? (My hunch: the latter view makes some of the arguments for doing patient philanthropy in practice less attractive.)
These questions might all be irrelevant to what you want to say about patient philanthropy, but I think they are the sort of questions we have to consider if we are wondering about whether to actually do patient philanthropy in practice.
I was more hopeful when I wrote this post that it would be possible to talk meaningfully about patient philanthropy in a more narrow, technical, abstract way, but after discussing it with Jason and others, I realize that the possibility space is far too large to do that — we end up essentially discussing anything that anyone imagines might plausibly happen in the distant future, as well as fundamental differences in worldviews — and it’s impossible to avoid messier, less elegant arguments, including highly uncertain speculation about future scenarios, and including arguments of a philosophical, moral, social, and political nature.
I want to clarify that I wasn’t trying to respond directly to your work or do it justice; rather, I was trying to address a more general question about whether we should actually do patient philanthropy in practice, all things considered. I cited you as the originator of patient philanthropy because it’s important to cite where ideas come from, but I hope I didn’t give readers the impression that I was trying to represent your work well or give it a fair shake. I was not really doing that; I was just using it as a jumping-off point for a broader discussion. I apologize if I didn’t make that clear enough in the post, and I could edit it if that needs to be made clearer.
I do my best at a lot of that speculating in the linked doc, which is why it’s so long, and end up thinking that those considerations probably don’t outweigh the (to my mind) central point about pure time preference and imperfect intergenerational altruism. But they might.
Unfortunately, patient philanthropy is the sort of topic where it seems like what a person thinks about it depends a lot on some combination of a) their intuitions about a few specific things and b) a few fundamental, worldview-level assumptions. I say “unfortunately” because this means disagreements are hard to meaningfully debate.
For instance, there are places where the argument, pro or con, depends on what a particular number is, and since we don’t know what that number actually is and can’t find out, the best we can do is make something up. (For example, whether, in what way, and by how much foundations created today will decrease in efficacy over long timespans.)
Many people in the EA community are content to say, e.g., that the chance of something is 0.5% rather than 0.05% or 0.005%, and rather than 5% or 50%, simply based on an intuitive judgment, and then to make life-altering, aspirationally world-altering decisions on that basis. My approach is closer to that of mainstream academic publishing, in which if you can’t rigorously justify a number, you can’t use it in your argument — it isn’t admissible.
So, this is a deeper epistemological, philosophical, or methodological point.
One piece of evidence that supports my skepticism of numbers derived from intuition is a forecasting exercise where a minor difference in how the question was framed changed the number people gave by 5-6 orders of magnitude (750,000x). And that’s only one minor difference in framing. If different people disagree on multiple major, substantive considerations relevant to deriving a number, perhaps in some cases their numbers could differ by much more. If we can’t agree on whether a crucial number is a million times higher or lower, how constructive are such discussions going to be? Can we meaningfully say we are producing knowledge in such instances?
So, my preferred approach when an argument depends on an unknowable number is to stop the argument right there, or at least slow it down and proceed with caution. And the more of these numbers an argument depends on, the more I think the argument just can’t meaningfully support its conclusion, and, therefore, should not move us to think or act differently.
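To illustrate the compounding worry with purely made-up numbers (none of these correspond to any real estimate, in the patient philanthropy debate or elsewhere): if a conclusion depends multiplicatively on three intuition-derived inputs, and reasonable guesses for each input span one-and-a-half to two orders of magnitude, the bottom line can span five or more.

```python
# Hypothetical illustration with invented numbers: how uncertainty compounds when a
# conclusion multiplies several intuition-derived inputs together.
low_guesses  = [0.0005, 0.01, 0.02]   # made-up "low-end" guesses for three inputs
high_guesses = [0.05,   0.5,  0.9]    # made-up "high-end" guesses for the same inputs

product_low, product_high = 1.0, 1.0
for lo, hi in zip(low_guesses, high_guesses):
    product_low *= lo
    product_high *= hi

print(f"bottom line ranges from {product_low:.1e} to {product_high:.1e}")
print(f"a factor of {product_high / product_low:,.0f} between the extremes")
```

Nothing hinges on these particular numbers; the point is just that the width of the bottom-line range grows multiplicatively with each additional made-up input.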
Thanks.