I’m a third-year college student at a top US university studying math and computer science. I’m struggling to decide between pursuing a PhD in AI safety research or working at a quant firm and earning to give (E2G). A third, wildcard career path would be a data-informed policy role where I could use my quantitative skills to help policymakers, but I’ve struggled to find roles like this that are both high impact and technically interesting (I’d love some help with this!).
I will be working at a quant trading firm (one of Citadel, Optiver, etc.) next summer as a software engineer, and I currently work in an AI research lab at school, so I’m well set up to pursue either career path. It’s a question of which path is higher impact and will be the most rewarding for me. I’ll try to list out some pros and cons of the PhD and E2G routes (ignoring data-driven policy roles for now, because I haven’t found one of those jobs yet).
Quant Firm E2G Pros:
Potential for $1M+ donations within first 5-10 years
Great work-life balance (<50 hours/week at the company I will be working at), perks, location, job security (again, specific to my company), and an all-around good work environment
Guaranteed job offer; I’ve already passed the interview and finished all the prep work
Quant Firm E2G Cons:
Writing code long-term (20+ years) sounds incredibly draining
I know E2G is high impact but sometimes it doesn’t feel that way; definitely feel like a sell out
AI PhD Pros:
Potentially super high impact, especially because I’m interested in reinforcement learning safety
Intellectually interesting; kind of a dream career
AI PhD Cons:
Success is not guaranteed: AI PhD programs are SUPER competitive. I would probably need to take a gap year and publish more papers before applying. Becoming a professor is even harder.
I love the idea of research and I have a lot of research experience, but the day-to-day work can be extremely frustrating; there’s a very real risk of getting burnt out during my PhD
If I don’t work in AI safety in particular, there’s a risk that the work I do might have as many negative effects as positive effects
I’d love to hear your thoughts on deciding between these roles as well as any ideas for the wildcard option that are worth exploring!
(Background: Have worked in trading since late 2013, with one ~18 month gap. Have also spoken to >5 people facing a decision similar to this one over the years. This is a set of points I often end up making. I’m moderately confident about each statement below but wouldn’t be surprised if one of them is wrong.)
I think both of these paths are very ‘spiky’, in the sense that I think the top 10% has many times more impact (either via donations or direct work) than the median. From a pure altruistic perspective, I think you mostly want to maximise the chance of spiking.
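One rough way to formalise that intuition (the symbols here are purely illustrative, not anything precise): let $p$ be your chance of ending up in the spike, $S$ your impact if you do, and $M$ the median impact. Then
\[
\mathbb{E}[\text{impact}] = p\,S + (1-p)\,M ,
\]
and when $S$ is many times $M$, even modest changes in $p$ swamp comparable proportional improvements to $M$, which is why maximising the chance of spiking is roughly the right target.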
One of the best ways to maximise this is to be able to switch after you realise you aren’t in that category. In trading, I think you’re likely to have a good sense of your approximate trajectory within 2-4 years, so in the likely event that you are not hitting the high end at that point, you have an opportunity to switch, provided your alternative path allows you to switch (often, but certainly not always, the case). Similarly, I’d try to work out what flexibility you have on the AI PhD path: if you do in fact find the day-to-day frustrating and decide to quit in order to avoid burnout, what are your options? If you can switch either way, that’s great, and this largely ceases to be an important consideration.
On a related note, and echoing some others: 2-4 years undoubtedly sounds like a long time now, but it’s actually a pretty small fraction of your career. Don’t feel like a ‘bad’ choice now will forever condemn you to an inferior path! Any choice that genuinely does lock you in this way is probably a bad choice for that reason alone, but in practice this feels true far more often than it actually is.
Some people underestimate quant trading earnings at top-tier firms, because the firms are extremely cagey about them. FWIW, I think >$1m annual earnings within 5-10 years is more like the 40th-80th percentile case, depending on the firm, while the ‘spike’ case pushes into >$5m within the first decade. That’s for trading. I have a weaker sense of software engineer earnings at these firms; they can either essentially match the traders or fall significantly behind (I don’t currently know of a case where they run ahead), and this depends heavily on the specific firm.
Predicting in advance which path you’re more likely to spike in is hard, and it’s going to be a question of personal fit, so it’s difficult for us to answer; I just think that’s the right question. That said, there are some things in your post that make me think AI is more likely to be your life’s work:
“I know E2G is high impact but sometimes it doesn’t feel that way; definitely feel like a sell out ...”
“Intellectually interesting; kind of a dream career”
Sometimes people feel things like the above and assume everyone else feels the same way. As someone who has heard both pro-E2G and pro-AI versions of all of these points, the one thing I’m confident of is that this is not the case. As a community, we could coordinate this in far worse ways than having all the people who are more excited on a gut level about working at quant firms do that, and all the people who are more excited about working on AI do that. I do think there are more of the latter than the former, but that’s OK; one person E2G-ing at that level (especially if they spike) can fund many other people doing direct work.
The above is all written from a ‘For the Greater Good’ perspective. It sounds like a lot of what is pulling you towards quant work is that it’s the safe, already-available, almost-certainly-going-to-work-out-well-for-you-personally option. This is a reasonable thing to pay attention to! Just how much attention depends on things like your general level of financial security, whether you have or expect to have dependents or other major family commitments, how wide/deep your career path is if a particular opportunity goes away, what kinds of backup you have (e.g. parents you could move in with), etc. Try not to assume things will work out OK, but equally try not to assume one bad call will end it all if you have layers of protection. This is incredibly variable by individual, and most EA advice is calibrated for young people who are pretty secure, don’t have large external obligations, and have solid backup; that in turn strongly points to taking a higher-than-usual level of risk in your altruistically-motivated endeavours. If that’s you, then great. If not, it might be worth some reflection on the extent to which you can afford to, e.g., aim for the PhD and fail.
Hey Anon,
Four years ago I was in a similar situation, with job offers from MIRI (research assistant) and a top quant trading firm (trading intern, with a likely transition to full-time).
I ended up taking the RA job, and not the internship. A few years later, I’m now a researcher at FHI, concurrently studying a stats PhD at Oxford.
I’m happy with what I decided, and I’d generally recommend people do the same, basically because I think there are enough multi-millionaire EAs that talent is at a large premium relative to donations. Compared to you, I had a better background for trading relative to academic AI: I played poker and gambled successfully on political markets, but my education was in medicine and bioinformatics. So I think for someone like you, the case for a PhD is stronger than it was for me.
That said, I do think it depends a lot on personal factors: how deeply interested in AI (safety) are you? How highly ranked, exactly, are the quant firm and the PhD programme where you end up getting an offer? And so on...
I’d be happy to provide more detailed public or private comments.
Congratulations on the quant trading firm offer! It sounds like right now you’re in a great overall position, and that you’re thinking things through really sensibly. A few thoughts:
For examples of data-driven policy roles, I wonder if you’d be interested in the type of research that the Center for Security and Emerging Technology does?
With regard to earning to give, I’m sorry to hear that it doesn’t feel high impact. Do you think that might be better once you have money to donate, and are spending time carefully thinking through where that could do the most good? Or perhaps if you spent quite a bit of time chatting to other people earning to give, and so had more of the sense of being in a community doing that? If you haven’t yet, I wonder if it’s worth your chatting to some other people who have been earning to give for a while about how they’ve found it. Likewise talking to someone who has been a software engineer for a longish while about how they’ve found it over time sounds like it could be useful.
On going into AI, I don’t know that I’d be worried about having negative effects if you don’t work in AI safety, because I’d expect that if you didn’t go into AI safety there would be other roles, in things like medical tech AI, which would be useful rather than neutral in expectation. On the other hand, it does sound kind of worrying to me for this path that you like the idea of research but don’t sound keen on the day-to-day. I think everyone finds research frustrating some of the time, so that seems totally fine, but if your overall sense is that you wouldn’t enjoy doing the research needed for a PhD, that probably wouldn’t be a great path for you.
I’m guessing it’s pretty hard to turn down the quant firm offer, given that it’s a great job. If you decide you’re actually keen to see how you can do with publishing more in AI, I wonder if it’s worth asking them about deferring the offer for a while? They might be fine with you spending a few months or more publishing before coming to them.
I imagine that right now this is feeling like a stark crossroads, where you have to go one direction or the other for life. That doesn’t sound right to me—I know various people who have worked at a hedge fund for a few years and then gone into AI safety. Likewise if you do an AI PhD and decide against research, you’ll be in a great position to earn to give afterwards. So to the extent you can think of this decision as just one step forward, and as a further test of what you might like to do over the longer term, I think that sounds like a more accurate framing and one which will take the pressure off. It seems useful to remember that you’re in a great position, even if things are a bit intimidating right now!
You should take the quant role, imo. Optionality is valuable (though not infinitely so), and quant trading gives you vastly more optionality. If trading goes well but you leave the field after five years, you will still have gained a large amount of experience and donated/saved a large amount of capital. It’s not unrealistic to aim for $500K donated and $500K+ saved in that timeframe, especially since these firms evidently think you are unusually talented. If you have five hundred thousand dollars or more saved, you are no longer very constrained by finances. Five hundred thousand dollars is enough to stochastically save over a hundred lives. There are several high-impact EA orgs with a budget of around a million dollars a year (Rethink Priorities comes to mind). If trading goes very well, you could personally fund such an org.
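(A rough check on that ‘hundred lives’ figure, assuming something in the region of $4,500-$5,000 per life saved for GiveWell-style top charities, which is a ballpark that moves around over time rather than a precise number:
\[
\$500{,}000 \div \$5{,}000 \text{ per life} \approx 100 \text{ lives},
\]
so ‘over a hundred’ is within reach at the cheaper end of that range.)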
How are you going to feel if you do the PhD and, after five years, decide that it was not the best path? You will have left approximately a million dollars and a huge amount of earning potential on the table. If you keep trading, you could later be free to work for no compensation if you want, and you would be able to bankroll a medium-sized project.
There are a lot of ways to massively regret turning down the quant job. It is plausible that the situation is so dire that you need to drop other paths and work on AI safety right now. But you need to be confident in a very detailed world model to justify giving up so much optionality. There are a lot of theories on how to do the most good. Stay upstream.
AI PhDs tend to be very well-compensated after graduating, so I don’t think personal financial constraints should be a big concern on that path.
More generally, skill in AI is going to be upstream of basically everything pretty soon; purely in terms of skill optionality, this seems much more valuable than being a quant.
In the world where your second paragraph is true, I’d expect the quant firms will start or have already started using AI heavily, and so by working as a software engineer at one of those firms you can expect to be able to build skills in that area. So then it’s a classic choice between ‘learning about something via a PhD’ versus ‘learning about something via working on a practical application’, which I generally think of as a YMMV question.
I’m curious whether you still expect the PhD to systematically have more optionality after accounting for that (if you weren’t already doing so).
So there are a few different sources of optionality from a PhD:
- Academic credentials
- Technical skills
- Research skills
Being a software engineer at a quant firm plausibly builds more general technical skills, but I expect many SWEs there work on infrastructure that has little to do with AI. I also don’t have a good sense of how fast quant firms are switching over to deep learning; I assume they’re on the leading edge, but maybe not all of them, or maybe they value interpretability too much to switch fully.
But I also think PhDs are pretty valuable for learning how to do innovative research at the frontiers of knowledge, and for the credentials. So it seems like one important question is: what’s the optionality for? If it’s for potentially switching to a different academic field, then PhD seems better. If it’s for leading a research organisation, same. Going into policy work, same. If it’s for founding a startup, harder to tell; depends on whether it’s an AI startup I guess.
Whereas I have more trouble picturing how a few years at a quant firm is helpful in switching to a different field, apart from the cash buffer. And I also had the impression that engineers at these places are usually compensated much worse than quants (although your earlier comment implies that this isn’t always the case?).
Actually, one other thing is that I was implicitly thinking about UK PhDs. My concern with US PhDs is that they can be so long, which makes me more optimistic about getting some external work experience first, to get a better perspective from which to make that commitment (which is what I did).
That makes sense, thanks for the extra colour on PhDs.
“Whereas I have more trouble picturing how a few years at a quant firm is helpful in switching to a different field, apart from the cash buffer.”
I’ve heard variants on this a few times, so you aren’t alone. To give some extra colour on what I think you’re gaining from working at quant firms: most of these firms still have a very start-up-like culture. That means you get significant personal responsibility and significant personal choice about what you work on, within a generally supportive culture. In general this is valuable, but it means there isn’t one universal answer to this question. Still, here are some candidate skills I think you’ll get the opportunity to develop, should you so choose:
Project management
People management
Hiring
Judgement (in the narrow 80k sense of the term)
(This list is illustrative, based on my own experience, rather than exhaustive. Some of the above will apply to the PhD as well; it’s not intended as a comparison.)
Consider tech roles in government! Governments do a lot of high-impact work, especially in the areas that most EAs care about (global health and development, long-term risks), so working in government could allow you to work directly on these areas and build connections that may open the doors to higher-impact work. If you’re a U.S. citizen, you can apply for the Civic Digital Fellowship (for students) or the Presidential Innovation Fellowship (for more seasoned technologists), both of which place technologists in the federal government.
Hello! I have a math BS and a CS MS with 8 years of experience, and I work in fintech doing AI and deep learning (not as a quant, but close), so hopefully I can shed some light for you :) To cut to the chase, I’d strongly consider the quant trading firm option, largely because you have a great offer and shouldn’t overlook that (especially if you think the work-life balance will be good! That is a major downside of many trading jobs).
First, you can get 1,000 opinions about a PhD, but my personal opinion is to skip it. It does lend some credibility, but sacrificing 5 years of career progression and salary is just such a high cost. I work with a lot of PhDs and I hold my own just fine.
Second, I’ve been donating 10% of my income for about 5 years now, and it ABSOLUTELY DOES feel good. Especially if you are like me and like looking at numbers and can let go of not “seeing” the impact first hand. I have a family member who did the Peace Corps, and I feel just as strong a connection to my impact as they do. I had the same hesitations you did, but I ultimately realized my desire to “feel” like I was doing good was more tied to a desire to “show off” on LinkedIn that I was doing good based on my employer or title. Most friends and family don’t know I E2G, and I’m fine with that. I’m still doing a hell of a lot of good, and I sleep fine at night.
Third (not about the PhD, but regarding quant vs. policy), ask yourself: if you pick wrong, which switch will be easier? Few trading shops will hire someone with a non-technical policy background, even in AI. Many AI policy jobs would love to hire a highly technical person who has inside knowledge of how the financial industry works.
Finally, don’t underestimate the intellectual stimulation of a quant job. To an outsider, I stare at stock market activity and Python code all day. But I find it incredibly thought-provoking. Our company has “journal clubs”, and I find time to read ML articles and books. Obviously, if you truly hate coding, then avoid it. But I still code daily, and it’s not nearly as “draining” as I expected. Don’t forget AI is hot stuff in fintech right now, so the two topics are not mutually exclusive.
Good luck!!
Working at a ritzy quant firm shouldn’t hurt your competitiveness for PhD programs too much (it could even improve it), and if you’re getting $1M+ / 5y E2G-worthy offers halfway through undergrad (and have already published!), you’ll probably still be able to get comparable offers if you decide to, e.g., master out. So in that regard, it probably doesn’t matter too much which path you take, since neither precludes reinvention.
If it were me, I’d take the bird in hand and work in the quant role… but if I felt I were able to make more meaningful “direct” contributions, I’d focus not just on E2G-ing but also on achieving financial independence as soon as possible. PhD stipends are quite a bit lower than industry pay (at my current school, CS students only make around ~$45k/y), so being able to supplement that income with proceeds from investments would free you from monetary concerns and let you focus your attention on more valuable pursuits (e.g. you wouldn’t have to waste time on unpleasant trivialities like household chores if you could instead hire a regular cleaning service + meal delivery; hell, spend another year or two at the firm and get yourself a part-time personal assistant for the duration of the grad program to manage your emails for you, haha). Solve the claims on your time that can be most cheaply solved first, to give yourself greater opportunities to direct more valuable hours down the line.
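(To sketch how that supplement could work, using purely illustrative numbers rather than anything specific to your situation: say $500k saved and a ~4% annual withdrawal rate, on top of the ~$45k stipend above. Then
\[
\$500{,}000 \times 0.04 = \$20{,}000/\text{yr}, \qquad \$45{,}000 + \$20{,}000 = \$65{,}000/\text{yr},
\]
i.e. roughly a 45% bump over the stipend alone, which is the sort of margin that pays for the outsourcing mentioned above.)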
Regarding the data-driven policy path, my sense is that unfortunately, most policy work in the U.S. today is not that data-driven, though there’s no doubt that that’s in part attributable to human capital constraints. Two exceptions do come to mind, though:
Macroeconomic stabilization policy (which is one of Open Philanthropy’s priority areas) definitely fits the bill. Much of the work on this in the U.S. occurs in the research and statistics and forecasting groups of various branches of the Federal Reserve System (especially New York, the Board of Governors in D.C., Boston, Chicago, and San Francisco). These groups employ mathematical tools like DSGE and HANK models to predict the effects of various (mainly but not exclusively monetary) policy regimes on the macroeconomy. Staff economists working on this modeling regularly produce research that makes it onto the desks of members of the Federal Open Market Committee and even gets cited in Committee meetings (where U.S. monetary policy is determined). To succeed on this path in the long term, you would need to get a PhD in economics, which probably has many of the same downsides as a PhD in computer science/AI, but the path might have other advantages, depending on your personal interests, skills, values, motivations, etc. One thing I would note is that it is probably easier to get into econ PhD programs with a math-CS bachelor’s than you would think (though still very competitive, etc.). The top U.S. economics programs expect an extensive background in pure math (real analysis, abstract algebra, etc.), which is more common among people who studied math in undergrad than among people who studied economics alone. A good friend of mine actually just started her PhD in economics at MIT after getting her bachelor’s in math and computer science and doing two years of research at the Fed. This is not a particularly unusual path. If you’re interested and have any questions about it, feel free to dm me.
At least until the gutting of the CDC under our current presidential administration, it employed research teams full of specialists in the epidemiology of infectious disease who made use of fairly sophisticated mathematical models in their work. I would consider this work to be highly quantitative/data-driven, and it’s obviously pertinent to the mitigation of biorisks. To do it long-term, you would need a PhD in epidemiology (ideally) or a related field (biostatistics, computational biology, health data science, public health, etc.). These programs are also definitely easier to get into with your background than you would expect. They need people with strong technical skills, and no one leaves undergrad with a bachelor’s in epidemiology. You would probably have to get some relevant domain experience before applying to an epi PhD program, though, likely either by working on the research staff at someplace like the Harvard Center for Communicable Disease Dynamics or by getting an MS in epidemiology first (you would have no trouble gaining admission to one of those programs with your background). One big advantage of epidemiology relative to macroeconomics and AI is that (my sense is) it’s a much less competitive field (or at least it certainly was pre-pandemic), which probably has lots of benefits in terms of odds of success, risk of burnout, etc. Once again, feel free to dm me if this sounds interesting to you and you have any questions; I know people who have gone this route, as well.
It’s great that you have two very strong options! The answer probably comes down to your judgment on a few questions:
a) What’s the likelihood of a catastrophic AI accident in your lifetime?
b) What’s the likelihood your work could help prevent that?
c) Where would you donate if you earn to give?
(I’m tempted to try to convince you to earn to give, because the opportunity you describe sounds excellent for you and for the world, and I’m pretty sceptical about AI research and excited about bednets! But ultimately you’ll need to figure out your views on these things.)