Richard’s response is about right. My prior with respect to influentialness is such that either: x-risk is almost surely zero; or we are almost surely not going to have a long future; or x-risk is higher now than it will be in the future, but also harder to prevent now than it will be in the future; or in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value; or the idea that most of the value is in the future is false.
I do think we should update away from those priors, and I think that update is sufficient to make the case for longtermism. I agree that the location in time that we find ourselves in (what I call ‘outside-view arguments’ in my original post) is sufficient for a very large update.
Practically speaking, thinking through the surprisingness of being at such an influential time made me think:
Maybe I was asymmetrically assessing evidence about how high x-risk is this century. I think that’s right; e.g. I now don’t think that x-risk from nuclear war is as high as 0.1% this century, and I think that longtermist EAs have sometimes overstated the case in its favour.
If we think that there’s high existential risk from, say, war, we should (by default) think that such high risk will continue into the future.
It’s more likely that we’re in a simulation.
It also made me take more seriously the thoughts that in the future there might be non-extinction-risk mechanisms for producing comparably enormous amounts of (expected) value, and that maybe there’s some crucial consideration(s) that we’re currently missing such that our actions today are low-expected-value compared to actions in the future.
Hmm, interesting. It seems to me that your priors cause you to think that the “naive longtermist” story, where we’re in a time of perils and if we can get through it, x-risk goes basically to zero and there are no more good ways to affect similarly enormous amounts of value, has a probability which is basically zero. (This is just me musing.)