Hey Alex. Really interesting post! To have a go at your last question, my intuition is that the spillover effects of GPR on increasing the probability of safely reaching technological maturity can't be neglected. I suppose my view differs in that, where you define “patient longtermist work” as GPR and distinct from XRR, I don’t see that it has to be. For example, I may believe that XRR is the more impactful cause in the long run, but think that I should wait a couple of hundred years before putting my resources towards it. Or that we should first figure out whether we are living at the hinge of history (which I’d classify as GPR). Does that make sense?
One other observation: working on s-risks typically falls within the scope of XRR and clearly also improves the quality of the future, though maybe this ignores your assumption of safely reaching technological maturity.
I think I’ve conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years’ time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (e.g. if a lock-in event were coming soon), and XRR could be patient.
(Side note: there are so many possible longtermist strategies! Any combination from {Patient, Urgent} × {Broad, Narrow} × {Trajectory change, XRR} is a distinct strategy, giving 2 × 2 × 2 = 8 in total. This is interesting, as people often conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR, but there are actually at least six other strategies; see the sketch below.)
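To make that combinatorics concrete, here's a minimal sketch (in Python, purely illustrative; the axis labels are just the ones from the side note above) enumerating the full strategy space:

```python
# Enumerate the 2 x 2 x 2 longtermist strategy space described above.
from itertools import product

axes = [("patient", "urgent"),
        ("broad", "narrow"),
        ("trajectory change", "XRR")]

strategies = list(product(*axes))
print(len(strategies))  # 8 combinations in total

# The two bundles people usually discuss:
common = {("patient", "broad", "trajectory change"),
          ("urgent", "narrow", "XRR")}

# Everything else: the "at least six other strategies".
others = [s for s in strategies if s not in common]
print(len(others))  # 6
```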
This model completely neglects meta-strategic work along the lines of ‘are we at the hinge of history?’ and ‘should we work on XRR or something else?’. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out as either increasing the probability of technological maturity or improving the quality of the future. So I’m not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
I had s-risks in mind when I caveated it as ‘safely’ reaching technological maturity, and I was including s-risk reduction in XRR. But I’m not sure that’s the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality of that future is large and negative. So it seems that s-risks are more like ‘quality-increasing’ than ‘probability-increasing’ work. The argument for them being ‘probability-increasing’ is that the most empirically likely s-risks might primarily be risks associated with the transition to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience).
Re 1. That makes a lot of sense now. My intuition still leans towards trajectory change interacting with XRR, because maybe the best way to reduce x-risks that appear after 500+ years is to focus on changing the trajectory of humanity (e.g. stronger institutions, cultural shifts, etc.). But I do think your model is valuable for illustrating the intuition you mentioned: that it seems easier to create a positive future via XRR than via trajectory change that aims to increase quality.
Re 2, 3. I think that’s reasonable, and maybe when I mentioned the meta-work before, it was due to my confusion between GPR and trajectory change.