Hey, great that you’re thinking about this at this stage.
I hope that people with more experience in e.g. AI risk work will chime in, but here are a few quick thoughts from someone who did a bachelor’s and master’s in maths, has done research related to existential risk, and now does project management for an organization doing such research.
I think any of maths, physics, or computer science could in principle be a very solid degree choice. I could easily see the decisive factor for you being which subject you feel most interested in right now, or which universities you can get into for each discipline.
Picking up the last point, I think the choice of university could easily be more important than the choice of subject. You say you want to stay near Zurich, but perhaps there are different universities you could reach from there (e.g. I think Zurich itself has at least two?). On the other hand, don’t sweat it. I think that especially in quantitative subjects and at the undergraduate level, university prestige is less important, and at least in the German-speaking area there aren’t actually huge differences in the quality of education that are correlated with university prestige.
However, this will still be a significant factor, especially in some careers.
Similarly, what you do within your degree can easily be more important than its subject, i.e. which courses you take, which topic you write your thesis on, etc. In particular, if you’re interested in AI risk, there is a lot of advice available on what to prioritize within your degree (see e.g. here + links therein).
Finally, what you do outside of your degree can easily be more important than its subject. For example, deep learning—an area highly relevant to AI risk—has very low barriers to entry compared to most areas of maths or physics. It could be good to find out early whether you’re interested in and good at machine learning, for instance by taking online courses such as this one or working through OpenAI’s “Spinning Up” materials. This stuff really doesn’t require much prior knowledge; it could be accessible to you even now, or else after the first 1-2 years of study.
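To give a feel for how low that barrier really is (this toy example is mine, not from any of the courses mentioned): the core idea of deep learning, adjusting parameters by gradient descent to reduce a loss, fits in a few lines of NumPy. Here we fit the line y = 2x + 1:

```python
# Fit y = 2x + 1 by gradient descent on mean squared error,
# using nothing but NumPy. No ML library required.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0        # target function (noise-free, to keep it simple)

w, b = 0.0, 0.0          # parameters to be learned
lr = 0.1                 # learning rate

for _ in range(500):
    err = (w * x + b) - y
    # gradients of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}")  # converges to w=2, b=1
```

A first-year student can follow every line; the maths involved is just derivatives and averages.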
I don’t think that research on AI risk requires a degree in computer science. There are many mathematicians and physicists doing technical AI safety research, and more broadly for reducing AI risk we’ll need social scientists, law scholars, policy advisors, and generally a multitude of people with a variety of expertise.
Yes, much (though not all) research on technical AI safety involves machine learning. For this, you’ll need programming and software engineering skills, and a computer science degree teaches those. However, you can also learn them in other degrees, or even fairly easily pick them up on the side. In addition, even the programming side of machine learning differs in important ways from traditional programming.
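One way to see that difference (a toy sketch of my own, not the commenter’s example): in traditional programming you write the rule yourself, while in machine learning you supply examples and let a fitting procedure recover the rule’s parameters.

```python
# Traditional programming: you write the rule by hand.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: you provide example inputs and outputs and let
# an optimizer find the rule's parameters (here: slope and intercept).
import numpy as np

celsius = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])
fahr = np.array([14.0, 32.0, 50.0, 68.0, 86.0])

# least-squares fit of fahr ≈ a * celsius + b
a, b = np.polyfit(celsius, fahr, deg=1)
print(a, b)  # recovers a ≈ 1.8, b ≈ 32.0
```

Much of the day-to-day work is then about data, loss functions, and evaluation rather than hand-writing program logic.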
On the other hand, for basically all technical AI safety research you’ll need maths. You may be able to learn this better in a maths or physics degree than in a CS degree; generally, it’s easier to move from a more abstract, theoretical background to more applied work than the other way around, so maths may leave the most options open.
Yes, in maths or physics you’ll learn many things that aren’t very relevant to the AI risk work you may end up doing, but the same is true of computer science. E.g. there are probably at most a few niche ways to apply courses on computability theory or databases (both typical CS subjects) to AI. On the other hand, I struggled to think of areas of physics that are clearly irrelevant to all AI risk work!
Some of the above points apply less if you can access undergraduate degrees that focus specifically on e.g. machine learning. But even then, note that it’s very possible to move later from maths or physics into machine learning, but harder to move the other way.
I’ve talked a lot about AI risk because you mentioned it, but I wouldn’t narrow down on AI risk too quickly. Quantitative degrees leave open a lot of options including, for instance, global priorities research or some of the less explored paths mentioned here and here. Examples of mathematicians who’ve later done great EA-relevant work that’s neither in academic mathematics nor AI risk include Owen Cotton-Barratt and David Roodman.
Talking to students doing the degrees you’re considering, at the universities you’re considering, can be a good source of information about what the degree is actually like, and similar things.
I’m currently doing a statistics PhD while researching AI safety, after a bioinformatics MSc and a medical undergraduate degree. I agree with some parts of this, but would contest others.
I agree that:
What you do within a major can matter more than which major you choose.
It’s easier to move from math and physics to CS.
But it’s still easier to move within CS than from physics or pure maths. And CS is where a decent majority of AI safety work is done. The second-most prevalent subject is statistics, since it covers statistical learning (a.k.a. machine learning) and causal inference, though these areas are just as often researched in CS departments. So if impact were the only concern, my advice would still be to start with CS, followed by statistics.
I’d agree with the above. I also wanted to check you’ve seen our generic advice here – it’s a pretty rough article, so many people haven’t seen it: https://80000hours.org/articles/advice-for-undergraduates/
Hey there!
First of all, I want to thank you for this extremely extensive and well-thought-out message; it is extremely helpful! As for the university: with the qualification I will have, ETH Zurich makes the most sense, which is unfortunately the furthest one can go in the country.
Ah yes, the Andrew Ng course is great; I’m still working through it, but that’s a great idea, and I’ll check out the OpenAI materials as well!
I also want to thank you a lot for your thoughts on degree choice (also in the context of AI safety), that was my first priority to figure out—and your thoughts on that were very helpful.
The note on global priorities research was also really interesting! That was a really good point; for some reason I had written GPR off in my mind, but it is actually a great idea. Perhaps proximity to Geneva and EU citizenship will be useful in that regard.
I’ve only just started digging into this post because it is so rich, so I will definitely be checking out more!
Also happy to help on a more local level: eazurich.org/join
If you’re not already in contact with EA Zürich, just send us an email and we will get back to you: info@eazurich.org .