Thanks, that’s helpful for thinking about my career (and thanks for asking that question Michael!) Edit: helpful for thinking about my career because I’m thinking about getting economics training, which seems useful for answering specific sub-questions in detail (‘Existential Risk and Economic Growth’ being the perfect example of this), but one economic model alone is very unlikely to resolve a big question.
Thank you :) I’ve corrected it
I think I’ve conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient.
(Side note: There are so many possible longtermist strategies! Any combination of (Patient, Urgent) × (Broad, Narrow) × (Trajectory Change, XRR) is a distinct strategy. This is interesting, as people often conceptualise the available strategies as either patient, broad trajectory change or urgent, narrow XRR, but there are actually at least six other strategies.)
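The counting above can be made explicit by enumerating the three axes directly; this is just a sketch of the combinatorics:

```python
from itertools import product

# Every combination of the three axes is a distinct longtermist strategy.
axes = [("Patient", "Urgent"), ("Broad", "Narrow"), ("Trajectory Change", "XRR")]
strategies = list(product(*axes))

# The two bundles people usually discuss:
common = {("Patient", "Broad", "Trajectory Change"), ("Urgent", "Narrow", "XRR")}
others = [s for s in strategies if s not in common]

print(len(strategies))  # 8 strategies in total
print(len(others))      # 6 beyond the two commonly discussed bundles
```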
This model completely neglects meta strategic work along the lines of ‘are we at the hinge of history?’ and ‘should we work on XRR or something else?’. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out as either increasing the probability of technological maturity, or in improving the quality of the future. So I’m not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
I had s-risks in mind when I caveated it as ‘safely’ reaching technological maturity, and was including s-risk reduction in XRR. But I’m not sure that’s the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the future’s quality is hugely negative. So s-risks seem more like ‘quality increasing’ than ‘probability increasing’. The argument for them being ‘probability increasing’ is that the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience).
Thanks for writing this, I like that it’s short and has a section on subjective probability estimates.
What would you class as longterm x-risk (reduction) vs. nearterm? Is it entirely about the timescale rather than the approach? Eg. hypothetically, very fast institutional reform could be nearterm, and doing AI safety field-building research in academia could be longterm if you thought it would pay off very late. Or do you think the longterm stuff necessarily has to be investment or institutional reform?
Is the main crux for ‘long-term x-risk matters more than short-term risk’ how transformative the next two centuries will be? If we start approaching technological maturity, x-risk might decrease significantly. Or do you think we might reach technological maturity with low x-risk, but should still work on reducing it?
What do you think about the assumption that ‘efforts can reduce x-risk by an amount proportional to the current risk’? That seems appropriate for medium levels of risk, eg. 1–10%, but if risk is already small, like 0.01–1%, halving it might get very difficult.
This is really interesting and I’d like to hear more. Feel free to just answer the easiest questions:
- Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia?
- What kinds of specialisation do you think we’d want: subject knowledge? Along different subject lines to academia?
- Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?
- What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare, and generally need research experience. Do you think this is a good model?
I’d really like to see “If causes differ astronomically in EV, then personal fit in career choice is unimportant”
Thanks for writing this. I’d love to see your napkin math
Thanks for the answer.
Will MacAskill mentioned in this comment that he’d ‘expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.’
You’re a good forecaster, right? Does it seem right to you that a panel of good forecasters would come to something like Will’s view rather than the median FHI view?
Thanks, those look good and I wasn’t aware of them
Yep—the author can click on the image and then drag from the corner to enlarge them (found this difficult to find myself)
It’s pretty blank—something like this
Yeah, that seems right to me.
On doubling consumption though: if you can suggest a policy that increases growth consistently, you might eventually cause consumption to be doubled (at some later time, consumption under the faster growth will be twice what it would have been under the slower growth). Or do you mean you don’t think you could suggest a policy change that would increase the growth rate by much?
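To make the compounding point concrete, here is a sketch with assumed growth rates (2% baseline vs. 2.5% under the policy; both numbers are my illustrative assumptions, not from the discussion):

```python
import math

# Years until consumption on the faster growth path is double what it
# would have been on the slower path: (1 + g_fast)^t / (1 + g_slow)^t = 2
g_slow, g_fast = 0.02, 0.025  # assumed growth rates, purely illustrative
t_double = math.log(2) / (math.log(1 + g_fast) - math.log(1 + g_slow))
print(round(t_double))  # ≈ 142 years
```

So even a half-percentage-point boost to growth would, under these assumed numbers, take over a century to double consumption relative to the counterfactual.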
Great to hear this has been useful!
I think if γ is around 1 then yes, spreading longtermism probably looks better than accelerating growth. Though I don’t know how expensive it is to double someone’s consumption in the long-run.
Doubling someone’s consumption by just giving them extra money might cost $30,000 for 50 years ≈ $0.5 million. It seems right to me that there are ways to reduce the discount rate that are much cheaper than half a million dollars for 13 basis points. Eg. some community building probably takes a person’s discount rate from around 2% to around 0% for less than half a million dollars.
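For what it’s worth, the ~$0.5 million figure is consistent with discounting a $30,000-per-year stream over 50 years at around a 5% rate. Both the per-year reading and the discount rate are my assumptions, so this is just one way the arithmetic could come out:

```python
# Present value of an assumed $30,000/year for 50 years at an assumed 5% discount rate.
payment, years, r = 30_000, 50, 0.05  # the 5% rate is my assumption
pv = payment * (1 - (1 + r) ** -years) / r
print(round(pv))  # ≈ 548,000, i.e. roughly $0.5 million
```

Without discounting, the same stream would total $1.5 million, so the discount rate does most of the work here.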
I don’t know how much cheaper it might be to double someone’s consumption by increasing growth but I suspect that spreading longtermism still looks better for this value of γ.
How confident are you that γ is around 1? I haven’t looked into it and don’t know how much consensus there is.