Nice, thanks for the explanation of your reasoning.
The example I gave there was the same as for simple cluelessness, but it needn't be. (I like this example because it shows that, even under simple cluelessness, the effects don't wash out.) If we imagine some version of complex cluelessness, we can see that the 'ripples on a pond' objection doesn't seem to work. E.g. increased economic growth -> increased carbon emissions -> increased climate change -> migration problems and resource struggles -> great power conflict, etc. As time goes on, the world where the extra economic growth happened will look more and more different from the world where it didn't. Does that seem true?
I agree that we don't know how to predict a bunch of these long-term effects, and this only gets worse as the timescales get longer. But why does that mean we can ignore them? Aren't we interested in doing the things with the best effects (all the effects)? Does it matter whether we can predict the effects at the moment? For example, does GiveWell doing an analysis of AMF mean that there are now better effects from donating to AMF? That doesn't seem right to me. It does seem more reasonable to donate after the analysis (more subjectively choice-worthy, or something like that). But the effects aren't better, right? Similarly, if there are unpredictable long-term effects, why does it matter (morally*) that the effects are unpredictable?
With regards to that EV calculation, I think that might be assuming you have precise credences. If we're uncertain in our EV estimates, don't we need to use imprecise credences? Then we'd have a bunch of different terms like
(EV under model n) × (credence in model n)
*or under whatever value system is motivating you/me, e.g. subjective preferences
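To make that concrete, here's a minimal sketch (Python, with made-up numbers and my own variable names, just to illustrate what I mean): with one precise credence distribution over models you get a single EV, but with a set of admissible distributions (imprecise credences) you only get a range of EVs.

```python
# Minimal sketch with hypothetical numbers: EV of one action under several models,
# first with a single (precise) credence distribution over models, then with a set
# of admissible distributions (imprecise credences).

model_evs = [10.0, -2.0, 0.5]  # EV under models 1..3 (made-up numbers)

def expected_value(credences):
    """Sum over models of (EV under model n) * (credence in model n)."""
    return sum(ev * c for ev, c in zip(model_evs, credences))

# Precise credences: one distribution over models -> one EV.
precise = [0.2, 0.5, 0.3]
print(expected_value(precise))

# Imprecise credences: a set of admissible distributions -> only a range of EVs.
admissible = [
    [0.1, 0.6, 0.3],
    [0.3, 0.4, 0.3],
    [0.2, 0.5, 0.3],
]
evs = [expected_value(c) for c in admissible]
print(min(evs), max(evs))  # the overall EV is only pinned down to this interval
```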
I think we should only ignore long-term effects that we can't reasonably hope to predict (e.g. how conception events are changed). With clean meat research, I see no strong argument for why it would affect the views of people 1000 years into the future, at least not in a negative way. So it's not a dominant consideration for me. And I don't put much effort into researching these very speculative considerations, because I don't think I would come to meaningfully strong conclusions even after years of research. I would spend a day or two thinking about long-term effects before donating, though.
With AMF or increased economic growth, I do see arguments about how they could negatively affect the future, so I would strongly consider them. When I used to follow GiveWell many years ago, almost all of their reasoning on the economic growth topic was here. I felt that it was not enough and I wanted them to do much more research on it. I don’t know if they did.
Similarly, if there are unpredictable long-term effects, why does it matter (morally*) that the effects are unpredictable?
I do think effects matter morally whether or not they are predictable. It's just that if all the arguments about impact on the long-term future are very uncertain, and I don't see how stronger arguments could be made, then long-term effects don't dominate my estimates.
There were parts of your comment that I didn’t manage to understand so I apologize if we are talking past each other.
IIRC, one prominent short-termist EA told me that they put so little weight on speculative, vague arguments that most questions with no clear answer default, for them, to a 50-50 split. E.g., they would probably say that they have a 50% credence that clean meat research will have positive value in 1000 years and a 50% credence that it will have negative value, so the EV is zero. You can see why they would focus on the short-term stuff. I just thought that this extreme view could be helpful for remembering what the short-termist perspective might look like.
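If I'm reading that view right, it implicitly assumes the possible good and bad outcomes have equal magnitude (call it v, my notation), so the calculation is just:

\[ \mathbb{E}[V] = 0.5\,(+v) + 0.5\,(-v) = 0. \]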
That’s basically what I’m describing in my comments here, too. If I can’t estimate the effects of an action on X, I’m thinking of just treating the two distributions for X as identical (although the random variables are different). This is similar to simple cluelessness.
Yeah that sounds like simple cluelessness. I still don’t get this point (whereas I like other points you’ve made). Why would we think the distributions are identical or the probabilities are exactly 50% when we don’t have evidential symmetry?
I see why you would not be sure of the long-term effects (not have an EV estimate), but not why you would have an estimate of exactly zero. And if you’re not sure, I think it makes sense to try to get more sure. But I think you guys think this is harder than I do (another useful answer you’ve given).
Why would we think the distributions are identical or the probabilities are exactly 50% when we don’t have evidential symmetry?
Basically, I don't have enough reason to believe we don't have evidential symmetry, because the proposed systematic causal effects (even if you separate different kinds of effects or considerations) aren't quantified, even roughly, with enough justification. You have no reason to believe that the probability that the outcome of action A 1000 years from now will be better than x (a deterministic outcome or value) is higher, by any p>0, than the corresponding probability for action B, for any probability difference p>0 or any x:

P[A_1000 > x] − P[B_1000 > x] > p > 0

(Compare to the definition of stochastic dominance. You can replace the strict >'s with ≥'s, except for p>0.)

So, I assume P[A_1000 > x] = P[B_1000 > x] for all x.

EDIT: You can also compare the distributions of outcomes of actions A and B 1000 years from now, and again, I don't have reason to believe the densities p_{A_1000}(x) and p_{B_1000}(x) differ by any p>0 for any x, or that P[A_1000 ∈ X] − P[B_1000 ∈ X] > p > 0 for any set of outcomes X and any p>0.
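Spelling that assumption out (for real-valued outcomes, and assuming the expectations are well-defined):

\[ \forall x:\; P[A_{1000} > x] = P[B_{1000} > x] \;\Rightarrow\; A_{1000} \text{ and } B_{1000} \text{ have the same distribution} \;\Rightarrow\; \mathbb{E}[A_{1000}] = \mathbb{E}[B_{1000}]. \]

That's how treating the two distributions as identical gets you back to a long-term EV difference of zero.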
Also, even if my EV is 0 and I’m treating it like simple cluelessness, can it not still make sense to try to learn more? Is the value of information under simple cluelessness necessarily 0?
It’s becoming increasingly apparent to me how strong an objection to longtermist interventions this comment is. I’d be very keen to see more engagement with this model.
My own current take: I hold out some hope that our ability to forecast long-term effects, at least under some contingencies within our lifetimes, will be not-terrible enough. And I'm more sympathetic to straightforward EV maximization than you are. But the probability of systematically having a positive long-term impact by choosing any given A over B seems much smaller than longtermists act as if it is; in particular, it does seem to be in Pascal's mugging territory.