Yeah that sounds like simple cluelessness. I still don’t get this point (whereas I like other points you’ve made). Why would we think the distributions are identical or the probabilities are exactly 50% when we don’t have evidential symmetry?
I see why you would not be sure of the long-term effects (not have an EV estimate), but not why you would have an estimate of exactly zero. And if you’re not sure, I think it makes sense to try to get more sure. But I think you guys think this is harder than I do (another useful answer you’ve given).
Basically, I don’t have enough reason to believe we lack evidential symmetry, because the proposed systematic causal effects (even if you separate different kinds of effects or considerations) aren’t quantified, even roughly, with enough justification. That is, you have no reason to believe that the outcome of action A 1000 years from now is better than x (a deterministic outcome or value) with probability at least \(p > 0\) higher than the outcome of action B is, for any probability margin \(p > 0\) or any \(x\):

\(P[A_{1000} > x] > P[B_{1000} > x] + p.\)
(Compare to the definition of stochastic dominance. You can replace the strict >'s with ≥'s, except for \(p > 0\).)
So, I assume \(P[A_{1000} > x] = P[B_{1000} > x]\) for all \(x\).
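For reference, the parenthetical above compares against the standard definition of first-order stochastic dominance. A sketch of the usual textbook form (the \(\succ_1\) notation is one common convention, not from the original discussion):

```latex
% First-order stochastic dominance of A_{1000} over B_{1000}:
% the survival function of A is everywhere at least that of B,
% and strictly greater somewhere. The margin-p variant denied
% above additionally demands a uniform gap p > 0 at every x.
\[
A_{1000} \succ_{1} B_{1000}
\iff
\begin{cases}
P[A_{1000} > x] \ge P[B_{1000} > x] & \text{for all } x,\\[2pt]
P[A_{1000} > x] > P[B_{1000} > x] & \text{for some } x.
\end{cases}
\]
```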
EDIT: You can also compare the full distributions of outcomes of actions A and B 1000 years from now, and again I don’t have reason to believe the densities \(p_{A_{1000}}(x)\) and \(p_{B_{1000}}(x)\) differ by any \(p > 0\) for any \(x\), or that \(P[A_{1000} \in X] - P[B_{1000} \in X] > p > 0\) for any set of outcomes \(X\) and any \(p > 0\).
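To make the "for any set of outcomes X" clause concrete: the largest achievable gap \(\sup_X \big(P[A_{1000} \in X] - P[B_{1000} \in X]\big)\) is exactly the total variation distance between the two distributions, so the claim is that this distance isn’t known to exceed any \(p > 0\). A minimal sketch for discrete distributions (the outcome labels and credences are invented for illustration):

```python
def total_variation(p_a: dict, p_b: dict) -> float:
    """Total variation distance between two discrete distributions:
    TV(A, B) = max over events X of (P[A in X] - P[B in X])
             = 0.5 * sum over x of |p_A(x) - p_B(x)|.
    """
    support = set(p_a) | set(p_b)
    return 0.5 * sum(abs(p_a.get(x, 0.0) - p_b.get(x, 0.0)) for x in support)

# Hypothetical credences over coarse-grained outcomes 1000 years out.
p_a = {"much worse": 0.25, "similar": 0.50, "much better": 0.25}
p_b = {"much worse": 0.25, "similar": 0.50, "much better": 0.25}

# Identical distributions: no event X separates A from B by any p > 0.
print(total_variation(p_a, p_b))  # 0.0
```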
Also, even if my EV estimate is 0 and I’m treating the situation like simple cluelessness, can it not still make sense to try to learn more? Is the value of information under simple cluelessness necessarily 0?
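A toy sketch of why the answer to that last question is plausibly no: a prior EV of exactly 0 is compatible with positive value of information, since information can let you condition your choice on the state. (The states and payoffs below are invented; this doesn’t settle whether such information is actually obtainable under simple cluelessness.)

```python
# Hypothetical toy model: the world is in state +1 or -1, each with
# credence 1/2 (evidential symmetry, so both actions have prior EV 0).
states = (+1, -1)
prior = {s: 0.5 for s in states}

def payoff(action: str, s: int) -> float:
    # Action "A" pays off the state's value; "B" is a safe zero.
    return float(s) if action == "A" else 0.0

# No information: commit to one action up front. Both have EV 0.
ev_without_info = max(
    sum(prior[s] * payoff(a, s) for s in states) for a in ("A", "B")
)

# Perfect information: observe the state, then pick the best action.
ev_with_info = sum(
    prior[s] * max(payoff(a, s) for a in ("A", "B")) for s in states
)

print(ev_without_info)                 # 0.0
print(ev_with_info)                    # 0.5
print(ev_with_info - ev_without_info)  # value of information = 0.5 > 0
```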
It’s becoming increasingly apparent to me how strong an objection to longtermist interventions this comment is. I’d be very keen to see more engagement with this model.
My own current take: I hold out some hope that our ability to forecast long-term effects, at least under some contingencies within our lifetimes, will be not-terrible enough. And I’m more sympathetic to straightforward EV maximization than you are. But the probability of systematically having a positive long-term impact by choosing any given A over B seems much smaller than longtermists act as though it is; in particular, it does seem to be in Pascal’s mugging territory.