Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world’s 3k billionaires. What good is an extra one-millionaire? Almost anyone’s comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.
It seems quite wrong to me to present this as so clear-cut. I think if we don’t get major extra funding the professional longtermist community might plateau at a stable size in perhaps the low thousands. A successful quantitative trader could support several more people at the margin (a very successful trader could support dozens). If you’re a good fit for the crowd, it might also be a good group to network with.
If you’re particularly optimistic about future funding growth, or pessimistic about community growth, you might think it’s unlikely we end up in that world in a realistic timeframe, but there’s likely to still be some hedging value.
To be clear, I mostly wouldn’t want people in the OP’s situation to drop the PhD to join a hedge fund. But it’s worth understanding that the main routes to impact in academic research are probably:
Providing leadership for the academic field from within the field, including:
Paradigm-setting
Culture-setting
Helping students orient to what’s important, and providing space for them to work on more important projects
Using academia as a springboard to affect non-academic projects (e.g. being an advisor on particular policy topics, or providing solid support for claims that are broadly useful)
I think for some people those just aren’t going to be a great personal fit (even if they can achieve conventional “success” in academia!), so it’s worth considering other options.
In this particular case, I’m kind of excited about getting more longtermist economists. But whether it makes sense for the OP to be such a person might depend on, e.g., how disillusioned they are with the field.
Thanks a lot for your comment. What you describe is a different route to impact than what I had in mind, but I suppose I could see myself doing this, even though it sounds less exciting than making a difference by contributing directly to making AI safer.
Note that the mechanisms I describe aren’t specific to economics; they cover academic research generally, and they also account for most of the ways most AI safety researchers (even those outside academia) will have impact.
There are potentially major crux moments around AI, so there’s also the potential to do an excellent job engineering real transformative systems to be safe at some point (though most AI safety researchers won’t be doing that directly). I suspect the indirect routes to impact for AI safety might feel more exciting because they’re more closely connected to those crucial moments: you might hope to set some small piece of the paradigm that the eventual engineers of the crucial systems will use, or to support a culture of responsibility among AI researchers, making it less likely that people at the key time ignore something they shouldn’t have.