I was very surprised to see that ‘funds being appropriated (or otherwise lost)’ is the main concern with attempting to move resources 100 years into the future. Before seeing this comment, I would have been confident that the primary difficulty is in building an institution which maintains acceptable values† for 100 years.
Some of the very limited data we have on value drift within individual people suggests losses of 11% and 18% per year for two groups over 5 years. I think these numbers are a reasonable estimate for people who have held certain values for 1-6 years, with long-run drop-off for individuals being lower.
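To make those rates concrete, here is a rough sketch (assuming, purely for illustration, a constant annual drift rate, which the long-run drop-off point above suggests is pessimistic):

```python
# Rough illustration: compound the 11% and 18% annual value-drift rates
# from above, assuming (unrealistically) that the rate stays constant.
for annual_drift in (0.11, 0.18):
    retained_5y = (1 - annual_drift) ** 5
    retained_100y = (1 - annual_drift) ** 100
    print(f"{annual_drift:.0%}/yr drift: "
          f"{retained_5y:.0%} retained after 5 years, "
          f"{retained_100y:.1e} retained after 100 years")
```

Even the lower rate, held constant, would leave less than a thousandth of a percent of the original commitment after a century, which is why the long-run drop-off matters so much for the 100-year case.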
A more relevant but less precise outside view is my intuitions about how long charities which have clear founding values tend to stick to those values after their founders leave. I think of this as ballpark a decade on average, though hopefully we could do better by investing time and money in increasing this.
Perhaps yet more relevant and yet less precise is the history of institutions through the eras which have built themselves around some values which they thought of as non-negotiable (in the same way that we might see impartiality as non-negotiable). For example, religious institutions. My vague, non-historian impression is that, even considering institutions founded with concrete values at their core, very few still had those values†† 100 years later, if they existed in the same form at all.
The thing I’d find most convincing in outweighing these outside views is simply an outline for how EAs can get this institutional value drift thing close to zero. I can imagine such a plan seeming obvious to others, but it currently looks like a potentially intractable problem to me.†††
† Possible example of acceptable values.
†† I’m excluding ‘maximise profits’ as a value!
††† This all becomes fairly simple upon the rise of any technology which would enable permanent lock-in. However, it seems that this would be a time to deploy a lot of resources immediately, so ways to move money into the future at that time seem less helpful. This seems like weak evidence for an unfortunate correlation between hingeyness and ability to move resources into the future.
Sorry - ‘or otherwise lost’ qualifier was meant to be a catch-all for any way of the investment losing its value, including (bad) value-drift.
I think there’s a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:
If you have precise values (e.g. classical utilitarianism) then it’s easier to transmit those values across time—you can write your values down clearly as part of the constitution of the foundation, and it’s easier to find and identify younger people to take over the fund who also endorse those values. In contrast, for other foundations, the ultimate aims of the foundation are often not clear, and too dependent on a particular empirical situation (e.g. Benjamin Franklin’s funds were ‘to provide loans for apprentices to start their businesses’ (!!)).
If you take a lot of time carefully choosing who your successors are (and those people take a lot of time over who their successors are), that also helps keep the original values in place.
Then to reduce appropriation, one could spread the funds across many different countries and different people who share your values. (Again, easier if you endorse a set of values that are legible and non-idiosyncratic.)
It might still be true that the chance of the fund becoming valueless gets large over time (if, e.g. there’s a 1% risk of it losing its value per year), but the size of the resources available also increases exponentially over time in those worlds where it doesn’t lose its value.
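As a quick sketch of that trade-off (the 1%/yr loss risk is the figure above; the 5%/yr real return is just an illustrative assumption of mine):

```python
# Expected value of the fund after 100 years, assuming a constant 1%/yr
# chance of losing all value (figure from above) and a 5%/yr real return
# in the worlds where it survives (illustrative assumption).
p_loss, r, years = 0.01, 0.05, 100
survival_prob = (1 - p_loss) ** years        # ~0.37
growth_if_survives = (1 + r) ** years        # ~131x
expected_multiple = survival_prob * growth_if_survives
print(f"P(survives 100y) = {survival_prob:.0%}")
print(f"Growth if it survives = {growth_if_survives:.0f}x")
print(f"Expected multiple = {expected_multiple:.0f}x the initial resources")
```

So in expectation the fund still ends up much larger than it started, roughly speaking whenever the annual return exceeds the annual loss rate.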
One caveat: there are also tricky questions about when ‘value drift’ is a bad thing, rather than the future fund owners just having a better understanding of the right thing to do than the founders did, which often seems to be true for long-lasting foundations.
Got it. Given the inclusion of (bad) value drift in ‘appropriated (or otherwise lost)’, my previous comment should just be interpreted as providing evidence to counter this claim:
But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year.
[Recap of my previous comment] It seems that this quote predicts a lower rate than there has ever† been before. Such predictions can be correct! However, a plan for making the prediction come true is needed.
It seems that the plan should be different to what essentially all†† the people with higher rates of (bad) value drift did. These particular suggestions (succession planning and including an institution’s objectives in its charter) seem qualitatively similar to significant minority practices in the past. (e.g. one of my outside views uses the reference class of ‘charities with clear founding values’. For the ‘institutions through the eras’ one, religious groups with explicit creeds and explicit succession planning were prominent examples I had in mind.) The open question then seems to be whether EAs will tend to achieve sufficient improvement in such practices to bring (bad) value drift down by around an order of magnitude relative to what has been achieved historically. This seems unlikely to me, but not implausible. In particular, the idea that it is easier to design a constitution based on classical utilitarianism than for other goals people have had is very interesting.
Aside: investing heavily in these practices seems easier for larger donors. The quote seems very hard to defend for donors too small to attract a highly dedicated successor.
This discussion has made me think that insofar as one does punt to the future, making progress on how to reduce institutional value drift would be a very valuable project, even if I’m doubtful about how much progress is possible.
† It seems appropriate to exclude all groups coordinating for mutual self-interest, such as governments. (This is broader than my initial carving out of for-profits.)
†† However, it seems useful to think about a much wider set of mission-driven organisations than foundations because the sample of 100-year-old foundations is tiny.
It seems that this quote predicts a lower rate than there has ever† been before.
Just to make sure I understand—you’re saying that, historically, the chance of funds (that were not intended just to advance mutual self-interest) being appropriated has always been higher than 2% per year?
If so, I’m curious what this is based on. Do you have specific cases of appropriation in mind? Are you mostly appealing to charities with clear founding values and religious groups, both of which you mention later? [Asking because I feel like I don’t have a good grasp on the probability we’re trying to assess here.]
Not appropriated: lost to value drift. (Hence, yes, the historical cases I draw on are the same as in my comment 3 up in this thread.) I’m thinking of this quantity as something like the proportion of resources which will in expectation be dedicated 100 years later to the original mission as envisaged by the founders, annualised.
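For concreteness, this is how I’d annualise it (a sketch only; 2%/yr is the rate quoted earlier in the thread, while the 10% retention figure is purely illustrative):

```python
# Convert between a 100-year 'still on-mission' fraction and an annualised
# loss rate. 2%/yr is the rate quoted earlier in the thread; the 10%
# retention figure is purely illustrative.
def annualised_loss(retained_fraction, years=100):
    return 1 - retained_fraction ** (1 / years)

def retained_after(annual_loss, years=100):
    return (1 - annual_loss) ** years

print(f"2%/yr loss -> {retained_after(0.02):.0%} expected to be on-mission after 100 years")
print(f"10% on-mission after 100 years -> {annualised_loss(0.10):.1%}/yr annualised loss")
```

On this framing, the 2%/yr claim is equivalent to expecting roughly an eighth of the resources to still be serving the original mission a century later.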
I think you make good points, and overall I feel quite sympathetic to the view you expressed. Just one quick thought pushing a bit in the other direction:
†† I’m excluding ‘maximise profits’ as a value!
But perhaps this example is quite relevant? To put it crudely, perhaps we can get away with keeping the value “do the most good” stable. This seems more analogous to “maximize profits” than to any specification of value that refers to a specific content of “doing good” (e.g., food aid to country X, or “abolish factory farming”, or “reduce existential risk”).
More generally, the crucial point seems to be: the content and specifics of values might change, but some of this change might be something we endorse. And perhaps there’s a positive correlation between the likelihood of a change in values and how likely we’d be to agree with it upon reflection. [Exploring this fully seems quite complex both in terms of metaethics and empirical considerations.]
Thanks. I agree that we might endorse some (or many) changes. Hidden away in my first footnote is a link to a pretty broad set of values. To expand: I would be excited to give (and have in the past given) resources to people smarter than me who are outcome-oriented, maximizing, cause-impartial and egalitarian, as defined by Will here, even (or especially) if they plan to use them differently to how I would. Similarly, keeping the value ‘do the most good’ stable maybe means something like keeping the outcome-oriented, maximizing, cause-impartial and egalitarian values stable.
For clarity, I excluded profit maximisation because incentives to pursue this goal seem powerful in a way that might never apply to effective altruism, however broadly it is construed. (The ‘impartial’ part seems especially hard to keep stable.) In particular, profit maximisation does not even need to be propagated: e.g. if a company does some random other stuff for a while, its stakeholders will still have a moderate incentive to maximise profits, so will typically return to doing this. A similar statement is that ‘maximise profits’ is the default state of things. No matter how broad our conception of ‘do the most good’ can be made, it seems likely to lack this property (except for lock-in scenarios).