Hello,
I recently graduated from the University of Manchester with a degree in Physics. I wrote my dissertation on how ChatGPT works and also conducted an experiment on its effectiveness at marking physics lab reports. (If you're interested in reading it, I've linked it below.)
I am really keen to start a career in AI safety, as I believe it is the most pressing issue we face.
I am based in Manchester and would love some advice on what my next steps should be to build a career in AI that makes the biggest impact.
https://www.linkedin.com/posts/ronnie-yaniv-921492346_dissertation-activity-7331315337654673410-UT0L?utm_medium=ios_app&rcm=ACoAAFaRe6kBQL90wPd3qWvu6aPqfdPVp5LhX4E&utm_source=social_share_send&utm_campaign=copy_link
Hi Ronnie,
I’d start by reading through BlueDot’s Future of AI course to get a better picture of the problem, and then continue to gain context by reading about various agendas, skills (e.g. research engineering), fellowships, organisations, and so on.
Along the way, I'd recommend doing some projects, hackathons, and writing/commenting (e.g. on AI safety content on LessWrong and the EA Forum), keeping an eye on (i) being useful, (ii) getting feedback and improving, and (iii) creating legible evidence of your skills.
It’s also useful to speak to a lot of people, to (i) find and build relationships/trust with people who care about similar things to you, and (ii) get feedback on your understanding/plans/projects and, as a result, improve. You can find communities online and in person, e.g. here. I’ve written a little about this earlier here.
Finally, start applying, and quite broadly! Since you’re early in your career, it’s likely you can learn a lot outside of AI safety that can be meaningfully applied within it. Look at target roles carefully, and consider also applying to anything that looks like a ‘stepping stone’ or ‘nearest neighbour’ to those roles.
Over time, as your context and understanding of AI safety grow, you can take sharper actions aimed at specific roles, agendas, and orgs; since you’re just getting started, consider getting a wide range of exposure to various pieces of AI safety. Here’s a list of resources that I maintain, but there are others, e.g. on 80k and on aisafety.com.