The scope of what could be considered “patient philanthropy” is pretty broad. My comment doesn’t apply to all potential implementations of the topic.
To start with, I’ll note the distinction between whether society should allow for patient philanthropy and whether it makes sense for a philanthropist who is attempting to maximize their own altruistic goals. For what it is worth, I think there should be some significant limits roughly in line with US law on private foundations, and I would close what I see as some loopholes in public-charity status (e.g., that donors can evade the intent of the public-charity rules by donating through a DAF, which is technically a public charity and so counts as public support).
But it’s not logically inconsistent to favor tightening the rules for everyone and to also think that if society chooses not to do so, then I shouldn’t unilaterally disadvantage my preferred cause areas while (e.g.) the LDS church increases its cash hoard.
A Cause Area in Which Yarrow’s Arguments Don’t Work Well for Me
I think some of these arguments depend significantly on what the donor is trying to do. I’m going to pick non-EA cause areas for the examples to keep the focus somewhat abstract (while also concrete enough for examples to work).
Let’s suppose Luna’s preferred cause area is freeing dogs from shelters and giving them loving homes. The rational preference argument doesn’t work here, and I know of no reason to think that the cost of freeing dogs will increase nearly as fast as the rate of return on investments. I also don’t have any clear reason to think that there are shovel-ready interventions today that will have a large enough effect on shelter populations 50 years from now to justify spending a much smaller sum of money now. (Admittedly, I didn’t research either question; please do your own research if you are interested in funding canine rescue.)
Luna does face risk from “operational, legal, political, or force majeure” considerations, as well as the risk of technological or social changes making her original goal ineffective or inefficient. But many of these considerations happen over time, meaning that Luna (or her successors) should be able to sense them and start freeing dogs if their risk of disruption over the next 10-20 years gets too high. More broadly, I think this is an answer to some criticisms—the philanthropist doesn’t have to cabin the discretion of the fund to act as circumstances change (although there are also costs to allowing more discretion).
Donors can invest their own money and deploy it when it is most appropriate.
This sounds like patient philanthropy lite—with an implied time limit of the rest of the donor’s life and a restriction on buying/selling assets, both coming from tax considerations. That addresses some valid concerns with patient philanthropy, but we lose the advantage of having the money irrevocably committed to charitable purposes. I’m not sure how to weigh those considerations.
The Anti-PP Argument Calls for Faith in Future Foundations, Donors, and Governments
For the reserve-fund variants of PP: the patient philanthropist may not want to trust other foundations and governments to react strongly enough to future developments. There’s at least some reason to hold such a view, although it may not be enough to justify the practice. I suspect most people think the government generally doesn’t do a great job with its funding priorities (although they would have diverging and even contradictory opinions on why that is the case). I am not particularly impressed by the big foundations that have a wide scope of practice (and thus are potentially flexible). While the observation that foundations tend to ossify and become bloated is an argument against patient philanthropy, it also counts as an argument against trusting big foundations to move swiftly and decisively in the face of an emerging threat or opportunity. Still, I think this premise would need to be developed and supported further to update my views on reserve-fund PP.
For other forms of PP: The assertion that the future world should rely on current-income donors, traditional foundations, and/or governments may rest on an assumption that the amount of need / opportunity in a given cause area tracks fairly well with the amount of funding available. If something happens in cause area X and it needs 1-2 orders of magnitude more money this year, will the money be forthcoming in short order? I don’t have a clear opinion on that (and it may depend on the cause area).
Patient Philanthropy May Work Particularly Well in Some Use Cases
Timing: John, in 1900, wants to promote LGBT rights. Deploying his funds in 1900 probably isn’t going to work very well from an effectiveness standpoint. Putting the money away and waiting for the right moment for the cultural winds to shift, and then pumping money to try to sustain/reinforce the wind change, sounds like a more effective strategy. In addition to causes that require cultural shifts, this argument could work for causes that need technological development in a broad sense.
Critical Mass: Ash is passionate about Pikachu welfare and has $1MM to spend. Few people (and few other potential donors) care about Pikachus right now, so the $1MM is unlikely to be enough to catalyze the field of Pikachu welfare studies. I’m writing this bullet point too early in the morning to do math, but AI tells me that $1MM would be $29.4MM in current dollars in 50 years at a 7% real rate of return. Ash can reasonably think that he has a better shot of creating a self-sustaining field of Pikachu welfare in 2075 than he has today. With several decades to build cash first, the field could survive much longer on his funding before it needs to secure the public support and wins that will probably be necessary to attract other funding.
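For the curious, here is the compounding arithmetic as a minimal sketch (assuming a constant 7% real return holds for 50 years, which is of course not guaranteed):

```python
# Compound growth of Ash's $1MM at an assumed constant 7% real annual return.
principal = 1_000_000  # Ash's current funds, in today's dollars
real_rate = 0.07       # assumed real (inflation-adjusted) annual return
years = 50

future_value = principal * (1 + real_rate) ** years
print(f"${future_value:,.0f}")  # -> $29,457,025, i.e., roughly $29.4MM
```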
Thanks for this very detailed, thoughtful comment, Jason. These are interesting considerations and thought experiments.
There are many particular points I could respond to, but let me focus on giving my most important overall point. I am suspicious of arguments that take a form that’s this abstract and theoretical, where you’re making an empirical prediction about the thing that would be best to do in practice, but the real data that you use as an input is only the historical CAGR of the S&P 500 and the rest is pure imagination or theory. Arguments like these add or multiply irreducible uncertainties together, which feels a bit like adding or multiplying infinities — in this case, it’s adding or multiplying unknowable numbers.
So, ultimately, this is a form of argument based on intuition, and the intuitions are weaker than the intuitions that support the opposite conclusion.
What if we apply some form of universalizability or rule consequentialism, such that we should follow the general rule that, if everybody (or most people) followed it, the world would be better off? It seems hard to argue that indefinitely delaying philanthropic impact is this sort of rule. For instance, this discussion would not be happening because effective altruism would not exist. And many, many good things in the world would not exist.
You can, of course, argue that we should think about what’s best on the margin, not about what’s the best general rule. Okay, sure. But then how do you determine what’s best on the margin? Should we have 10x more philanthropy focused on immediate disbursements before we allocate resources to patient philanthropy? 100x more? 1000x more? Or maybe we already have 10x, 100x, or 1000x more actively disbursing philanthropy than is optimal. Or maybe, through a strange coincidence, we’re currently right on the margin. How do we determine this? We can’t. So, marginal thinking doesn’t actually help us here.
There’s also a sort of free-riding argument: you’re just waiting for others to do the work to improve the world first so you can make your impact numbers go up in the future, without accounting for their contribution. This is true both in general with investing in stocks over the very long term and with arguments of the sort that you should wait for LGBT rights to become more popular so that advocating for them becomes more cost-effective — this seems like an accounting trick, not a principled argument. If advocating now is a prerequisite to advocating later, advocating now is part of the cost. By opting not to pay it, you aren’t increasing the overall cost-effectiveness of the LGBT rights movement, you’re just juicing your own numbers.
By analogy, consider something like environmental impact assessment. These numbers can be juiced depending on what you count or don’t count. What’s in doubt is not the overall environmental impact of all economic activity, but what credit or blame should be assigned to different economic actors. Would patient philanthropy make the world better off overall? Or would it just increase the credit that could be assigned to some philanthropic foundations on one particular accounting scheme?
In general, we should (hopefully) expect the world getting better to actually reduce the cost-effectiveness of future interventions, e.g. the cost of saving a life will go up as global poverty goes down and the poorest countries experience per capita GDP growth. But you can produce the opposite result by being very selective about your interventions. Say you only care about providing free hologram entertainment to disadvantaged children; since holograms are very expensive today, you’ll wait until they’re much cheaper. But shouldn’t you be responsible for making them cheaper? Why are you free-riding and counting on others to do that for you, for free, to juice your philanthropic impact?
You can always contrive a thought experiment that shows it’s theoretically possible for patient philanthropy to be the best thing to do, but we’re not asking whether it could theoretically be the best, we’re asking whether it’s the best in practice. So, we have to consider what’s realistic as opposed to what’s possible.
Ultimately, we may not be able to produce a realistic mathematical model that tells us an answer one way or the other. We might have to rely on general principles, general wisdom, or general intuition. The main thing recommending patient philanthropy is that we think/hope the stock market will go up a lot over time. The things recommending against it are:
that we think/hope the cost-effectiveness of interventions will decline a lot over time
democracy/equality concerns that are strong enough to justify banning patient philanthropy
doubt about the viability of patient philanthropy institutions over the long term (I very much doubt the Patient Philanthropy Fund will exist in 100 years)
free-riding concerns
universalizability/rule consequentialism
concerns about impact laundering/impact accounting tricks
an overall resistance to privileging weakly supported ideas that can’t easily be tested but require a large practical commitment over existing ideas with a strong empirical track record
(Also, I forgot to mention, we shouldn’t be accounting for stock market gains as if they were “free money” that dropped from the sky, because r > g has troubling implications, but I won’t get into that now.)
This isn’t as satisfying as a crisp cost-effectiveness estimate where in the end you get some clear number telling you what to do, but I think it’s the right/best way to reason about such things.
If advocating now is a prerequisite to advocating later, advocating now is part of the cost. By opting not to pay it, you aren’t increasing the overall cost-effectiveness of the LGBT rights movement, you’re just juicing your own numbers.
I think that relies on a certain model of the effects of social advocacy. Modeling is error-prone, but I don’t think our activist in 1900 would be well-served spending significant money without giving some thought to their model. I think the model for getting stuff done often looks like a more complicated version of: Inputs A and B are expected to produce C in the presence of a sufficient catalyst and the relative absence of inhibiting agents.
Putting more A into the system isn’t going to help produce C if the rate limit is being caused by the amount of B available, the lack of the catalyst, or the presence of inhibiting agents. Money is a useful input that is often fungible at various rates into other necessary inputs, and it can sometimes influence catalyst and inhibitor levels; but sometimes it cannot (or can do so only very inefficiently and/or at levels beyond the funder’s ability to meaningfully influence).
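To make the rate-limiting point concrete, here is a toy sketch (all names and functional forms are hypothetical, chosen purely for illustration, not a claim about how social change actually scales):

```python
# Toy model of the reagent metaphor: output C is gated by the scarcer of
# inputs A and B, enabled by a catalyst, and suppressed by inhibitors.
def output_c(a: float, b: float, catalyst: float, inhibitor: float) -> float:
    if catalyst <= 0:
        return 0.0  # without any catalyst, A and B produce nothing
    return min(a, b) * catalyst / (1 + inhibitor)

# Pouring in more A (e.g., money) doesn't help when B is the binding constraint:
print(output_c(a=10, b=2, catalyst=1, inhibitor=0))  # 2.0
print(output_c(a=20, b=2, catalyst=1, inhibitor=0))  # still 2.0
```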
Sometimes for social change, having the older generation die off or otherwise lose power is useful. There’s not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let the broader cultural and demographic changes do some of the hard work for them.
There’s also the reality that efforts often decay if there isn’t sufficient forward momentum—that was the intended point of the Pikachu welfare example. Ash doesn’t have the money right now to found a perpetual foundation for the cause that will be able to accomplish anything meaningful. If he front-loads the money—say on some field-building, some research grants, some grants to graduate students—and the money runs out, then the organizations will fold, the research will grow increasingly out of date, and the graduate students will find new areas to work in.
Say you only care about providing free hologram entertainment to disadvantaged children; since holograms are very expensive today, you’ll wait until they’re much cheaper. But shouldn’t you be responsible for making them cheaper? Why are you free-riding and counting on others to do that for you, for free, to juice your philanthropic impact?
The more neutral-to-positive way to cast free-riding is as employing leverage. I’m really not concerned about free-riding on for-profit companies, or even much governmental work (especially things like military R&D, which has led to various socially useful technologies).
That’s not an accounting trick in my book—there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren’t the benefits I care about, and Big Hologram isn’t likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
As a society, we give corporations and similar entities certain privileges to incentivize behavior because a lot of value ends up leaking out to third parties. For example, the point of patents is “To promote the Progress of Science and useful Arts” with the understanding that said progress becomes part of the commons after a specified time has passed. Utilizing that progress after the patent period has expired isn’t some sort of shady exploitation of the researcher; it is the deal society made in exchange for taking affirmative actions to protect the researcher’s IP during the patent period.
Sometimes for social change, having the older generation die off or otherwise lose power is useful. There’s not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let the broader cultural and demographic changes do some of the hard work for them.
I don’t agree with this causal model/explanatory theory.
This is an at least partly deterministic theory of culture: it says culture is steered by forces that human creativity, agency, knowledge, and effort cannot themselves steer. I don’t agree with that view. I think culture is changed by what people decide to do.
That’s not an accounting trick in my book—there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren’t the benefits I care about, and Big Hologram isn’t likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
That depends on two things:
If I fund research, then no one else in the future will subsidize the technology and provide it for free.
If I don’t fund research, somebody else will.
I guess it could theoretically be true that both assumptions are correct, and maybe we can imagine a scenario where you would have good reasons to believe both of these things, but in practice, in reality, I think it’s rare that we ever really know things like that. So, while it’s possible to imagine scenarios where the upfront money will definitely be supplied by someone else and the down-the-line money definitely won’t, what does this tell us about whether this is a good idea in practice?
The hologram example is making the point: if the pool of dollars required to produce an outcome is a certain amount, the overall cost-effectiveness of producing that outcome doesn’t change regardless of which of those dollars are yours. I think your point is: your marginal cost-effectiveness could be much higher or lower depending on what’s going to happen if you do nothing. Which is true, I just don’t think we can actually know what’s going to happen if you do nothing, and the best version of this still seems to be guesswork or hunches.
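To illustrate with some made-up numbers (all figures hypothetical, just to show the accounting structure):

```python
# Hypothetical figures illustrating overall vs. "juiced" cost-effectiveness.
upfront_cost = 90  # research/advocacy money someone must spend first ($MM)
later_cost = 10    # the patient philanthropist's later spending ($MM)
benefit = 200      # value ultimately delivered ($MM-equivalent)

# The overall cost-effectiveness of the outcome is fixed, whoever pays:
overall = benefit / (upfront_cost + later_cost)  # 2.0

# Claiming the whole benefit against only your own later dollars looks far
# better, but only by ignoring the upfront contributions of others:
juiced = benefit / later_cost  # 20.0

print(overall, juiced)
```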
It also seems like an oddly binary choice of the sort that doesn’t really exist in real life. If you have significant philanthropic money, can you really not affect what others do? Let’s flip it: if another philanthropist said they would subsidize holograms down the line, that would affect what you would do. So, why not think you have the same power?
What seems to be emerging here is an overall theme of: ‘the future will happen the way it’s going to happen regardless of what we do about it’ vs. ‘we have the agency to change how events play out starting right now’. I definitely believe the latter; I definitely disbelieve the former. We have agency. And, on the other hand, we can’t predict the future.
Who was it who recently quoted someone, maybe the physicist David Deutsch or the psychologist Steven Pinker, saying something like: how terrible would it be if we could predict the future? Because that would mean we had no agency.