I do my best at a lot of that speculating in the linked doc, which is why it’s so long, and end up thinking that those considerations probably don’t outweigh the (to my mind) central point about pure time preference and imperfect intergenerational altruism. But they might.
Unfortunately, patient philanthropy is the sort of topic where it seems like what a person thinks about it depends a lot on some combination of a) their intuitions about a few specific things and b) a few fundamental, worldview-level assumptions. I say “unfortunately” because this means disagreements are hard to meaningfully debate.
For instance, there are places where the argument either pro or con depends on what a particular number is, and since we don’t know what that number actually is and can’t find out, the best we can do is make something up. (For example, whether, in what way, and by how much foundations created today will decrease in efficacy over long timespans.)
Many people in the EA community are content to say, e.g., that the chance of something is 0.5% rather than 0.05%, 0.005%, 5%, or 50%, based purely on intuition, and then make life-altering, aspirationally world-altering decisions on that basis. My approach is closer to that of mainstream academic publishing, in which a number you can't rigorously justify can't be used in your argument; it isn't admissible.
So, this is a deeper epistemological, philosophical, or methodological point.
One piece of evidence that supports my skepticism of numbers derived from intuition is a forecasting exercise where a minor difference in how the question was framed changed the number people gave by 5-6 orders of magnitude (750,000x). And that’s only one minor difference in framing. If different people disagree on multiple major, substantive considerations relevant to deriving a number, perhaps in some cases their numbers could differ by much more. If we can’t agree on whether a crucial number is a million times higher or lower, how constructive are such discussions going to be? Can we meaningfully say we are producing knowledge in such instances?
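(As a quick sanity check on that figure, and purely as illustrative arithmetic rather than anything from the original exercise: a 750,000x difference does indeed sit between 5 and 6 orders of magnitude.)

```python
import math

# A factor of k corresponds to log10(k) orders of magnitude.
ratio = 750_000
orders = math.log10(ratio)
print(f"{ratio:,}x ~ {orders:.2f} orders of magnitude")  # between 5 and 6
```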
So, my preferred approach when an argument depends on an unknowable number is to stop the argument right there, or at least slow it down and proceed with caution. And the more of these numbers an argument depends on, the more I think the argument just can’t meaningfully support its conclusion, and, therefore, should not move us to think or act differently.
Thanks.