Hello there, just introducing myself (also kind of a crosspost from Reddit) :)
My name is Aaron, and I’m a senior in high school from Arizona. I discovered the EA rabbit hole while reading through Applied Divinity Studies over winter break; it’s super amazing and humbling to see y’all smart people come together to find efficient ways to make the world better.
I’m fairly set on studying computer science in college because it fits the criteria of something that I’m relatively interested in, have a bit of experience in, and that pays well. I like to have (ever-changing and sometimes unrealistic) long-term goals about what I want to do in my life, and I was pretty sold on the idea of becoming a software engineer in the Bay Area, living below my means, and eventually retiring early in a low-cost-of-living area to spend my time volunteering and doing the things I really wanna do in life.
When I discovered EA I was really interested in the idea of working in AI safety, which 80,000 Hours categorizes as one of the most urgent global issues to work on. I know virtually nothing about machine learning or artificial intelligence, but it’s one of the things I wanna learn about now that my senioritis is kicking into full gear! I love statistics and thinking about the implications of tech on society and public policy, so it seems like something I might be able to contribute towards. I’m also studying linear algebra and multivariable calculus this year, which I understand are important mathematical foundations for studying machine learning. I really don’t know anything about this field, however; I just know that AI safety is clearly an important area that EA seems to agree requires more research, so I’m not sure if it will be a good fit for me until I try it.
80,000 Hours also writes that potential AI safety researchers should have “strong technical abilities (at the level of a top 10 CS or math PhD program globally)”, which seems really scary and difficult. I’m graduating at the top of my high school class and I’ve dabbled in a bit of stuff like the AMC and Codeforces in high school, but I’m definitely not extremely gifted or talented, nor do I spend a lot of time on these things.
I’ve read that AI alignment is clearly an urgent global issue that’s very talent-constrained, but AI itself is a super competitive field which has way more supply than demand. It seems that the people who work in AI safety are the cream of the crop who graduate as PhDs from the world’s best universities, dedicating many years to solving these really important questions. I think that if I work hard enough in college, I might be able to contribute in such an effective way. However, I’m not sure if I’m willing to commit that much of my life to enter such a competitive field, when I could be making much more in industry and potentially retire early and volunteer my time in other ways/earn to give within the EA community. I’m also wondering how much experience one needs in order to effectively contribute within AI safety; this article seems to suggest that with enough dedication, someone who comes from a competitive role in industry and doesn’t have a PhD can still impact the field in a positive way.
In addition, I’m super passionate about US election analysis; I love reading stuff like the NYT Upshot/Nate Cohn/Dave Wasserman and learning about voting patterns, election forecasting, and demography. I was wondering if there is anybody in EA who shares similar interests, and whether there are any EA-related paths that relate to voting/elections/democracy in the US.
Finally, I’m also applying to college this year, and I was wondering if there are any specific universities that have strong EA communities. I’ve applied to most of the UCs, ASU, U of A, and Cal Poly, so I’m specifically wondering about those.
Thanks so much for reading this far down a post by a random high schooler :) Someone on the EA subreddit told me that I shouldn’t be this worried about my future and that I don’t need to make these decisions right now, but I’m honestly really worried about what the future holds for our world and excited about how I might be able to do something about it, so I hope I can receive a bit of advice on how to best be a member of the EA community in the future. Looking through all the stuff that happens within the community is really inspiring and humbling, and I’m really excited to hear your advice. Thanks!
Hi! Yes, I work on AI safety, but like many others here I like to follow Dave Wasserman etc. Michael Sadowsky is one person who works with political data full-time. Whether you want to work on AI safety, work with political data, or just earn to give, studying CS or statistics is an ideal starting point. I would suggest picking AI-relevant classes at a good school and maybe trying some research; that should set you up well whatever path you end up pursuing.
Hello, Aaron! My name is also Aaron, and I help to run the Forum.
Computer science is an excellent starting point as a college major; it feeds into many other fields and gives you an easy way to take part in a huge number of projects if you decide to go that route.
80,000 Hours also writes that potential AI safety researchers should have “strong technical abilities (at the level of a top 10 CS or math PhD program globally)”, which seems really scary and difficult.
I wouldn’t worry about this sort of thing at the outset. You’re at the top of your class, you’re studying linear algebra, and you’ve been doing math for fun—those are all good signs. If you look at the profiles of the people who are actually doing AI safety work, I think you’ll find quite a few whose educational backgrounds don’t match this profile.
I’ve read that AI alignment is clearly an urgent global issue that’s very talent-constrained, but AI itself is a super competitive field which has way more supply than demand [...] I’m not sure if I’m willing to commit that much of my life to enter such a competitive field.
Some competitive fields are very risky to pursue, because the skills you train in the process aren’t very lucrative outside of the competitive slots. Professional sports and orchestral music are two examples of risky paths like this.
However, AI safety isn’t like that: even if you don’t get a position in the field, the skills you learn along the way will still be very lucrative. You might be slightly hindered if you’ve been spending time on obscure safety-related topics rather than something more commercial, but you’ll also have a network of contacts in the EA and AI safety communities (which are pretty well-connected in these areas).
There are also a bunch of ways to “test” your skills in this area before you start applying to full-time jobs; for example, some organizations in the field have events and workshops aimed at students and other non-experts, and there are places like this forum and LessWrong where you can publish ideas and get feedback from people who work at AI safety orgs.
So I don’t think you have to worry about committing too much of your life in this way, as long as you spend at least some of your time learning skills that will make you a solid candidate for industry jobs. (This doesn’t mean that AI safety is necessarily the best thing for you to do out of every possible path you could pursue—I just don’t think you should be wary of it for this reason.)
I’m also wondering how much experience one needs in order to effectively contribute within AI safety; this article seems to suggest that with enough dedication, someone who comes from a competitive role in industry and doesn’t have a PhD can still impact the field in a positive way.
You didn’t include a link to a specific article, but this sounds correct to me. AI safety is a very young field and there’s a lot of work to be done; this means there should be good opportunities to make progress without having to spend many years developing expertise beforehand.
In addition, I’m super passionate about US election analysis; I love reading stuff like the NYT Upshot/Nate Cohn/Dave Wasserman and learning about voting patterns, election forecasting, and demography. I was wondering if there is anybody in EA who shares similar interests, and whether there are any EA-related paths that relate to voting/elections/democracy in the US.
There’s definitely some of this in EA! You might be interested in:
The Center for Election Science, which works to replace plurality voting with approval voting in the U.S. and has received a lot of grant funding from EA-aligned donors. It’s led by Aaron Hamlin, who is deeply passionate about improving our voting system (and is one of my personal favorite Aarons).
Rethink Priorities’ work on ballot initiatives (no need to read the whole thing; it’s just an example of EA people going deep on election-related work).
This post on electoral reform.
The Open Model Project: this isn’t really an “EA” project, but one of their team members, Peter Hurford, is a longtime member of the community. If you want to do polling-related work, he could be a good person to talk to.
Finally, I’m also applying to college this year, and I was wondering if there are any specific universities that have strong EA communities. I’ve applied to most of the UCs, ASU, U of A, and Cal Poly, so I’m specifically wondering about those.
You can find a fairly comprehensive list of EA groups here.
Of the schools you listed: UC Berkeley has a sizable EA community and is located in one of the world capitals of EA (the other is Oxford, UK). UC San Diego has a moderately active group; I also live within walking distance of the school, so drop me a note if you end up there :-)
Not sure about the rest of your list.
I’m honestly really worried about what the future holds for our world and excited about how I might be able to do something about it.
This seems like the ideal way to be thinking as a high-school senior. There are reasons to worry, but you’re in a good position to make a really big impact. College will be busy, and you’ll be exposed to lots of new ideas, but I hope you stay interested and involved with EA! Maybe I’ll see you at a conference in a year or two.
Thanks a lot for your detailed response. It was really clarifying and I appreciate it :)
Hi! If you’re interested in CS, I suggest checking out the public interest tech movement. I’ve been involved in public interest tech for over four years, and recently I’ve been thinking about its intersection with EA.
The Civic Digital Fellowship is a 10-week tech internship in the federal government that is open to college students who are U.S. citizens. I encourage you to apply once you start college.
I also recommend checking out Impact Labs, founded by fellow EA Aaron Mayer. They run a winter fellowship and a summer internship program every year.
There are many other opportunities in public interest tech; some are more aligned with EA causes than others. I can’t list them all, but you can use this page as a starting point.