I’d argue we don’t necessarily know yet whether this is true. It may well be true, but it may well be false.
I think it’s almost certainly true (confidence ~90%) that far future effects account for the bulk of impact for at least a substantial minority of interventions (like at least 20%? But very difficult to quantify believably).
Also seems almost certainly true that we don’t know for which interventions far future effects account for the bulk of impact.
Separately, I feel pretty confident that, taking into account all the possible long-term effects I can think of (population ethics, meat eating, economic development, differential technological development), the effect of AMF is still net positive. I wonder if you really can model all these things? I previously wrote about five ways to handle flow-through effects in analysis and like this kind of weighted quantitative modeling.
I suspect it’s basically impossible to model all the relevant far-future considerations in a way that feels believable (i.e. high confidence that the sign of all considerations is correct, plus high confidence that you’re not missing anything crucial).
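To make that concrete, here's a toy sketch of the kind of weighted quantitative model in question (every weight and sign probability below is invented purely for illustration): even when most considerations lean the right way, uncertainty about their signs compounds, and the model's confidence in the overall sign falls well short of "high confidence."

```python
import random

# Toy weighted model of flow-through effects. Every weight and sign
# probability here is made up purely for illustration.
considerations = {
    # name: (magnitude of the effect, probability its true sign is positive)
    "direct lives saved":            (1.0, 0.95),
    "population ethics":             (0.8, 0.60),
    "meat eating":                   (0.5, 0.40),
    "economic development":          (0.7, 0.70),
    "differential tech development": (0.9, 0.55),
}

def sample_net_effect():
    """Draw one possible world: each consideration contributes its full
    magnitude, with a sign sampled from our credence that it's positive."""
    total = 0.0
    for magnitude, p_positive in considerations.values():
        sign = 1 if random.random() < p_positive else -1
        total += sign * magnitude
    return total

trials = 100_000
p_net_positive = sum(sample_net_effect() > 0 for _ in range(trials)) / trials
print(f"P(net positive) ≈ {p_net_positive:.2f}")
# With these (made-up) numbers the answer comes out around 0.75: positive in
# expectation, but nowhere near "high confidence that the sign is correct",
# and that's before counting considerations we haven't thought of at all.
```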
...the effect of AMF is still net positive.
I share this intuition, but “still net positive” is a long way off from “most cost-effective.”
AMF has received so much scrutiny because it’s a contender for the most cost-effective way to give money – I’m skeptical we can make believable claims about cost-effectiveness when we take the far future into account.
I’m more bullish about assessing the sign of interventions while taking the far future into account, though that still feels fraught.
I recently played two different video games with heavy time-travel elements. One of the games heavily implied that choosing differently made small differences for a little while but ultimately didn’t matter in the grand scheme of things. The other heavily implied that even the smallest of changes could butterfly into dramatically different outcomes. I kind of find both intuitions plausible, so I’m just pretty confused about how confused I should be.
I wish there were a way to empirically test this, other than with time travel.
A lot of big events in my life have had pretty in-the-moment-trivial-seeming things in the causal chains leading up to them. (And the big events appear contingent on the trivial-seeming parts of the chain.)
I think this is the case for a lot of stuff in my friends’ lives as well, and appears to happen a lot in history too.
It’s not the far future, but the experience of regularly having trivial-seeming things turn out to be important later on has built my intuition here.
It’s surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there’s no signal whatsoever and it’s all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we’ve managed to document regularities in how the world works. It’s true that as you move “up the stack”, say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.
Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.
...you are making a much stronger claim: That there’s no signal whatsoever and it’s all noise. I think this is pretty unlikely.
I’m making the claim that with regard to the far future, it’s mostly noise and very little signal.
I think there’s some signal re: the far future. E.g. probably true that fewer nuclear weapons on the planet today is better for very distant outcomes.
But I don’t think most things are like this re: the far future.
I think the signal:noise ratio is much better in other domains.
Humans evolved intelligence because the world has predictable aspects to it.
I don’t know very much about evolution, but I suspect that humans evolved the ability to make accurate predictions on short time horizons (i.e. 40 years or less).
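One toy way to see why I find short horizons so much more tractable than the far future (the 1% annual decay rate below is completely made up): if even a small fraction of our predictive signal erodes each year, a 40-year forecast retains meaningful signal while a 1,000-year forecast is almost pure noise.

```python
# Toy illustration only: assume a made-up 1% of predictive signal is lost
# per year, and see how much survives over different horizons.
annual_retention = 0.99  # hypothetical; the qualitative picture is similar for nearby rates

for years in (10, 40, 100, 1000):
    remaining = annual_retention ** years
    print(f"{years:>5} years: signal remaining ≈ {remaining:.2g}")
# Roughly: 0.90 at 10 years, 0.67 at 40, 0.37 at 100, and ~4e-05 at 1000.
```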