Hello, Aaron! My name is also Aaron, and I help to run the Forum.
Computer science is an excellent starting point as a college major; it feeds into many other fields and makes it easy to take part in a huge number of projects if you decide to branch out later.
80,000 Hours also writes that potential AI safety researchers should have “strong technical abilities (at the level of a top 10 cs or math phd program globally)”, which seems really scary and difficult.
I wouldn’t worry about this sort of thing at the outset. You’re at the top of your class, you’re studying linear algebra, and you’ve been doing math for fun—those are all good signs. If you look at the profiles of the people who are actually doing AI safety work, I think you’ll find quite a few whose educational backgrounds don’t match this profile.
I’ve read that AI alignment is clearly an urgent global issue that’s very talent constrained, but AI itself is a super competitive field which has way more supply than demand [...] I’m not sure if I’m willing to commit that much of my life to enter such a competitive field.
Some competitive fields are very risky to pursue, because the skills you train in the process aren’t very lucrative outside of the competitive slots. Professional sports and orchestral music are two examples of risky paths like this.
However, even if you don’t get a position in AI safety, the skills you learn along the way will still be very marketable. You might be slightly hindered if you’ve been spending time on obscure safety-related topics rather than something more commercial, but you’ll also have a network of contacts in the EA and AI safety communities (which are pretty well-connected in these areas).
There are also a bunch of ways to “test” your skills in this area before you start applying to full-time jobs; for example, some organizations in the field have events and workshops aimed at students and other non-experts, and there are places like this forum and LessWrong where you can publish ideas and get feedback from people who work at AI safety orgs.
So I don’t think you have to worry about committing too much of your life in this way, as long as you spend at least some of your time learning skills that will make you a solid candidate for industry jobs. (This doesn’t mean that AI safety is necessarily the best thing for you to do out of every possible path you could pursue—I just don’t think you should be wary of it for this reason.)
I’m also wondering how much experience one needs in order to effectively contribute within AI safety; this article seems to suggest that with enough dedication, someone who comes from a competitive role in industry and doesn’t have a PhD can still impact the field in a positive way.
You didn’t include a link to a specific article, but this sounds correct to me. AI safety is a very young field and there’s a lot of work to be done; this means there should be good opportunities to make progress without having to spend many years developing expertise beforehand.
In addition, I’m super passionate about US elections analysis; I love reading stuff like the NYT Upshot/Nate Cohn/Dave Wasserman and learning about voting patterns and elections forecasting and demography; I was wondering if there is anybody in EA who shares similar interests, and whether there are any EA-related paths that relate to voting/elections/democracy in the US.
There’s definitely some of this in EA! You might be interested in:
The Center for Election Science, which advocates for approval voting in the U.S. (as an alternative to plurality voting) and has received a lot of grant funding from EA-aligned donors. It’s led by Aaron Hamlin, who is deeply passionate about improving our voting system (and is one of my personal favorite Aarons).
Rethink Priorities’ work on ballot initiatives (no need to read the whole thing; it’s just an example of EA people going deep on election-related work)
This post on electoral reform
The Open Model Project—this isn’t really an “EA” project, but one of their team members, Peter Hurford, is a longtime member of the community. If you want to do polling-related work, he could be a good person to talk to.
Finally, I’m also applying to college this year, and I was wondering if there are any specific universities which have strong EA communities. I’ve applied to most of the UCs, ASU, U of A, and Cal Poly so I’m specifically wondering about those.
You can find a fairly comprehensive list of EA groups here.
Of the schools you listed: UC Berkeley has a sizable EA community and is located in one of the world capitals of EA (the other is Oxford, UK). UC San Diego has a moderately active group; I also live within walking distance of the school, so drop me a note if you end up there :-)
Not sure about the rest of your list.
I’m honestly really worried for what the future holds for our world and excited for how I might be able to do something about it.
This seems like the ideal way to be thinking as a high-school senior. There are reasons to worry, but you’re in a good position to make a really big impact. College will be busy, and you’ll be exposed to lots of new ideas, but I hope you stay interested and involved with EA! Maybe I’ll see you at a conference in a year or two.
Thanks a lot for your detailed response. It was really clarifying and I appreciate it :)