Thanks for this very detailed, thoughtful comment, Jason. These are interesting considerations and thought experiments.
There are many particular points I could respond to, but let me focus on my most important overall point. I am suspicious of arguments this abstract and theoretical, where you're making an empirical prediction about what would be best to do in practice, but the only real data used as an input is the historical CAGR of the S&P 500, and the rest is pure imagination or theory. Arguments like these add or multiply irreducible uncertainties together, which feels a bit like adding or multiplying infinities: in this case, it's adding or multiplying unknowable numbers.
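To make the compounding-uncertainty point concrete, here is a toy simulation (every number in it is made up): if an estimate is the product of several inputs, each "known" only to within a factor of 10, the resulting estimate spans orders of magnitude.

```python
import math
import random

random.seed(0)

def log_uniform(lo, hi):
    """Draw a factor whose logarithm is uniform between log(lo) and log(hi)."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def one_estimate():
    # Five hypothetical inputs, each uncertain to within a factor of 10.
    return math.prod(log_uniform(0.1, 1.0) for _ in range(5))

samples = sorted(one_estimate() for _ in range(100_000))
p5, p95 = samples[5_000], samples[95_000]
print(f"90% of the estimates span a factor of about {p95 / p5:.0f}")
```

With real inputs like "future stock returns" the uncertainty isn't even a clean factor of 10; it's unquantifiable, which is the point.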
So, ultimately, this is a form of argument based on intuition, and, in my view, these intuitions are weaker than the intuitions that support the opposite conclusion.
What if we apply some form of universalizability or rule consequentialism, such that we should follow the general rule that, if everybody (or most people) followed it, the world would be better off? It seems hard to argue that indefinitely delaying philanthropic impact is this sort of rule. For instance, this discussion would not be happening because effective altruism would not exist. And many, many good things in the world would not exist.
You can, of course, argue that we should think about what's best on the margin, not about what's the best general rule. Okay, sure. But then how do you determine what's best on the margin? Should we have 10x more philanthropy focused on immediate disbursements before we allocate resources to patient philanthropy? 100x more? 1000x more? Or maybe we already have 10x, 100x, or 1000x more actively disbursing philanthropy than is optimal. Or maybe, through a strange coincidence, we're currently right on the margin. How do we determine this? We can't. So, marginal thinking doesn't actually help us here.
There's also a sort of free-riding argument: you're just waiting for others to do the work to improve the world first so you can make your impact numbers go up in the future, without accounting for their contribution. This is true both in general with investing in stocks over the very long term and with arguments of the sort that you should wait for LGBT rights to become more popular before advocating for them, because advocacy will then be more cost-effective. This seems like an accounting trick, not a principled argument. If advocating now is a pre-requisite to advocating later, advocating now is part of the cost. By opting not to pay it, you aren't increasing the overall cost-effectiveness of the LGBT rights movement, you're just juicing your own numbers.
By analogy, consider something like environmental impact assessment. These numbers can be juiced depending on what you count or don't count. What's in doubt is not the overall environmental impact of all economic activity, but what credit or blame should be assigned to different economic actors. Would patient philanthropy make the world better off overall? Or would it just increase the credit that could be assigned to some philanthropic foundations on one particular accounting scheme?
In general, we should (hopefully) expect the world getting better to actually reduce the cost-effectiveness of future interventions; e.g., the cost of saving a life will go up as global poverty goes down and the poorest countries experience per capita GDP growth. But you can elicit the opposite result by being very selective about your interventions. Say you only care about providing free hologram entertainment to disadvantaged children; since holograms are very expensive today, you'll wait until they're much cheaper. But shouldn't you be responsible for making them cheaper? Why are you free-riding and counting on others to do that for you, for free, to juice your philanthropic impact?
You can always contrive a thought experiment that shows it's theoretically possible for patient philanthropy to be the best thing to do, but we're not asking if it's theoretically possible to be the best, we're asking if it's the best in practice. So, we have to consider what's realistic as opposed to what's possible.
Ultimately, we may not be able to produce a realistic mathematical model that tells us an answer one way or the other. We might have to rely on general principles, general wisdom, or general intuition. The main thing recommending patient philanthropy is that we think/hope the stock market will go up a lot over time. The things recommending against it are:
that we think/hope the cost-effectiveness of interventions will decline a lot over time
democracy/equality concerns that are strong enough to ban patient philanthropy
doubt about the viability of patient philanthropy institutions over the long term (I very much doubt the Patient Philanthropy Fund will exist in 100 years)
free-riding concerns
universalizability/rule consequentialism
concerns about impact laundering/impact accounting tricks
an overall resistance to accepting weakly supported ideas that canât easily be tested but require a large practical commitment over existing ideas with a strong empirical track record
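To see how the first of these considerations trades off against stock market growth, here is a toy give-now-vs-give-later comparison (every parameter is hypothetical; in reality neither the future market return nor the future cost of impact is knowable, which is the whole problem):

```python
def lives_saved(principal, years, market_return, cost_growth, cost_today=5_000):
    """Toy model: give now vs. invest and give after `years`.

    market_return: assumed annual growth rate of invested money
    cost_growth:   assumed annual growth rate of the cost per life saved
    cost_today:    assumed dollars per life saved today (made-up number)
    """
    give_now = principal / cost_today
    pot_later = principal * (1 + market_return) ** years
    cost_later = cost_today * (1 + cost_growth) ** years
    return give_now, pot_later / cost_later

# Waiting wins only if the market outruns the rising cost of impact;
# the whole question reduces to ((1 + r) / (1 + d)) ** years.
now, later = lives_saved(1_000_000, years=50, market_return=0.05, cost_growth=0.04)
print(f"give now: {now:.0f} lives; wait 50 years: {later:.0f} lives")
```

Nudge `cost_growth` a point higher than `market_return` and the conclusion flips. Since both parameters are unknowable guesses, the model mostly restates the disagreement rather than resolving it.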
(Also, I forgot to mention: we shouldn't account for stock market gains as if they were "free money" that dropped from the sky, because r > g has troubling implications, but I won't get into that now.)
This isn't as satisfying as a crisp cost-effectiveness estimate where in the end you get some clear number telling you what to do, but I think it's the right/best way to reason about such things.
If advocating now is a pre-requisite to advocating later, advocating now is part of the cost. By opting not to pay it, you aren't increasing the overall cost-effectiveness of the LGBT rights movement, you're just juicing your own numbers.
I think that relies on a certain model of the effects of social advocacy. Modeling is error-prone, but I don't think our activist in 1900 would be well served spending significant money without giving some thought to their model. I think the model for getting stuff done usually looks like a more complicated version of: Inputs A and B are expected to produce C in the presence of a sufficient catalyst and the relative absence of inhibiting agents.
Putting more A into the system isn't going to help produce C if the rate limit is the amount of B available, the lack of the catalyst, or the presence of inhibiting agents. Money is a useful input that is often fungible at various rates into other necessary inputs, and it can sometimes influence catalyst and inhibitor levels, but sometimes it cannot (or can do so only very inefficiently, or at scales beyond the funder's means).
Sometimes, for social change, having the older generation die off or otherwise lose power is useful. There's not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or in the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let broader cultural and demographic changes do some of the hard work for them.
There's also the reality that efforts often decay if there isn't sufficient forward momentum; that was the intended point of the Pikachu welfare example. Ash doesn't have the money right now to found a perpetual foundation for the cause that will be able to accomplish anything meaningful. If he front-loads the money (say, on some field-building, some research grants, some grants to graduate students) and the money runs out, then the organizations will fold, the research will grow increasingly out of date, and the graduate students will find new areas to work in.
Say you only care about providing free hologram entertainment to disadvantaged children; since holograms are very expensive today, you'll wait until they're much cheaper. But shouldn't you be responsible for making them cheaper? Why are you free-riding and counting on others to do that for you, for free, to juice your philanthropic impact?
The more neutral-to-positive way to cast free-riding is employing leverage. I'm really not concerned about free-riding on for-profit companies, or even much governmental work (especially things like military R&D, which has led to various socially useful technologies).
That's not an accounting trick in my book; there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren't the benefits I care about, and Big Hologram isn't likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
As a society, we give corporations and similar entities certain privileges to incentivize behavior, because a lot of value ends up leaking out to third parties. For example, the point of patents is "To promote the Progress of Science and useful Arts," with the understanding that said progress becomes part of the commons after a specified time has passed. Utilizing that progress after the patent period has expired isn't some sort of shady exploitation of the researcher; it is the deal society made in exchange for taking affirmative actions to protect the researcher's IP during the patent period.
Sometimes, for social change, having the older generation die off or otherwise lose power is useful. There's not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or in the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let broader cultural and demographic changes do some of the hard work for them.
I don't agree with this causal model/explanatory theory.
This is some kind of at least partly deterministic theory about culture, which says culture is steered by forces that can't be redirected by human creativity, agency, knowledge, or effort. I don't agree with that view. I think culture is changed by what people decide to do.
That's not an accounting trick in my book; there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren't the benefits I care about, and Big Hologram isn't likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
That depends on two things:
If I fund research, then no one else in the future will subsidize the technology and provide it for free.
If I donât fund research, somebody else will.
I guess it could theoretically be true that both assumptions are correct, and maybe we can imagine a scenario where you would have good reasons to believe both of these things, but in practice I think it's rare that we ever really know things like that. So, while it's possible to imagine scenarios where the upfront money will definitely be supplied by someone else and the down-the-line money definitely won't, what does this tell us about whether this is a good idea in practice?
The hologram example is making the point: if producing an outcome requires a certain pool of dollars, the overall cost-effectiveness of producing that outcome doesn't change regardless of which of the dollars are yours. I think your point is: your marginal cost-effectiveness could be much higher or lower depending on what's going to happen if you do nothing. Which is true; I just don't think we can actually know what's going to happen if you do nothing, and the best version of this still seems to be guesswork or hunches.
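The overall-vs-marginal distinction can be stated in a few lines of arithmetic (all numbers invented purely for illustration):

```python
# Hypothetical setup: producing the outcome (cheap holograms) requires a
# fixed pool of dollars, no matter whose dollars they are.
TOTAL_COST = 100.0  # dollars required to produce the outcome
BENEFIT = 500.0     # total value of the outcome once produced

# Overall cost-effectiveness is fixed, whoever pays:
overall = BENEFIT / TOTAL_COST  # 5.0

def marginal(my_dollars, others_dollars_without_me):
    """Your marginal cost-effectiveness depends entirely on the
    counterfactual -- which is the part we can't actually know."""
    happens_with_me = others_dollars_without_me + my_dollars >= TOTAL_COST
    happens_without_me = others_dollars_without_me >= TOTAL_COST
    return BENEFIT * (happens_with_me - happens_without_me) / my_dollars

decisive = marginal(10, 90)    # your $10 tips it over: 50.0
redundant = marginal(10, 100)  # it would have happened anyway: 0.0
```

The same $10 gift scores 50.0 or 0.0 depending on an unobservable input (`others_dollars_without_me`), which is why the marginal framing, while valid, doesn't give us a number we can actually compute.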
It also seems like an oddly binary choice of the sort that doesn't really exist in real life. If you have significant philanthropic money, can you really not affect what others do? Let's flip it: if another philanthropist said they would subsidize holograms down the line, that would affect what you would do. So, why not think you have the same power?
What seems to be emerging here is an overall theme of "the future will happen the way it's going to happen regardless of what we do about it" vs. "we have the agency to change how events play out starting right now". I definitely believe the latter, I definitely disbelieve the former. We have agency. And, on the other hand, we can't predict the future.
Who was it who recently quoted someone, maybe the physicist David Deutsch or the psychologist Steven Pinker, saying something like: how terrible it would be if we could predict the future, because that would mean we had no agency?