AI Safety Career Bottlenecks Survey Responses

A few months ago, AI Safety Support conducted an AI Safety Career Bottlenecks Survey (if you missed it you are still welcome to respond).

One of the questions asked was “Anything you would like help with? What would make you more efficient? Give us your wish list.”

This blogpost is a list of my responses to the survey answers to that question. It was originally an email sent to all survey respondents who gave us their email address (and responded before March 4). The only difference is that I’ve corrected some typos and updated some information.

This blogpost is not a complete list of all wishes. I’ve only listed things I have some sort of useful response to.

Answered Wishes

General

A list of resources.

Studies

Opportunities to work with someone (preferably more experienced than me) doing research

  • Apply to AI Safety Camp

  • Several AI Safety research groups have internships, and some opportunities like this will probably open up in the summer, though they are usually very competitive.

  • We’ll probably do another round of our mentorship program, but we don’t know yet when that will be.

A suggested reading list

Good books on the subject would help

Access to online courses in AI safety, ideally with tough problem sets, quizzes, and tests

  • Here you go. This project ended due to lack of funding, but the lessons that were completed are fully functional, with quizzes and everything, as far as I know.

Career and Funding

I want to know how to get into AI safety

  • That depends on where you start. We are happy to give you personal advice. Just reach out.

A place where positions in the field are advertised

Detailed wishlists from employers on desired qualifications for a safety researcher

  • No one knows exactly what you need to know to do AI Safety research, though some people have made educated guesses (see the wish about a reading list above).

  • The most important factor for getting an AI Safety research job is having your own ideas and opinions about what research you should do. Even internships often require you to write your own research proposals. My suggestion is to learn what you need to start asking your own questions, and after that learn what you need to answer those questions.

Information on positions with lower education thresholds to serve as stepping off points for an aspiring researcher.

  • Some industry labs (e.g. DeepMind) hire research engineers, a role with lower official requirements than other research jobs. But these jobs are very competitive.

  • “positions with lower education thresholds to serve as stepping off points for an aspiring researcher.” This is exactly what a PhD is.

  • You can also apply for a grant from the Long-Term Future Fund, and make your own path.

I would love to have more of an understanding of how people in degrees outside of computer science, neurology and philosophy can contribute to this field. I do believe that economics has a lot to offer, but I struggle to see any opportunities.

  • I agree with you that other fields most likely have a lot to offer, and you are also correct in noticing that there are no established AI Safety research approaches building on those fields. Remember that AI Safety is a very early-stage field, so you should not expect the research path to be laid out for you.

  • Make your own opportunity by finding your own research questions. However, this is hard, so please reach out if you want more help.

Graduate programs focused on this topic.

  • OpenPhil provides a PhD fellowship for AI Safety research

  • If you get accepted as an ML PhD student at UC Berkeley, you can join the Center for Human-Compatible AI

  • The Safe & Trusted AI program is mainly about near-term AI Safety, but allows long-term safety work, and several of the current students are interested in long-term AI Safety

  • Some other places that seem like good options for an AI Safety PhD (non-exhaustive):

    • Stanford, Oxford, Toronto, MIT, Amsterdam, Cambridge

    • If you want to help make a better resource for this, let us know.

  • AI Safety is a growing field. There are still not many AI Safety professors, but more and more universities have a few postdocs or PhD students interested in this topic.

  • For more discussion on this, join the AI Alignment Slack and go to the channel #applying-for-grad-school.

Not needing a day job.

I just want decent pay, job security, and to see the results of my work.

  • As a rule, junior researchers don’t get job security. This is not great, but that’s unfortunately how things are at the moment.

    • If you have access to lots of money and want to help improve this situation, let us know.

  • If you want financial security, you should go for a career in some area that pays well. When you have saved up some money, you can provide a salary either for future you or for some other AI Safety researcher.

  • There is unfortunately no way to ensure that you will see the results of your research efforts. There will always be the risk that you are researching a dead end, and even when things go well, progress will be uneven and unpredictable. If this would make you unhappy, then don’t do research.

Information About Active Research Areas

I want AI Safety academic journals.

  • There is no AI Safety journal, but we have something better: The AI Alignment Forum. (I’m serious about this. Peer review sucks, and journals don’t even have a comment section. However, some people I respect do disagree with me.)

  • Many AI conferences have an attached AI Safety workshop (usually a mix of long-term and short-term), where you can publish.

Some interesting problems that you (people working in AI Safety) are working on.

Better maps of the space of AI research would probably be helpful.

  • This one is good, but I don’t know if it is being kept up to date.

I wish that there were a searchable and frequently updated database of papers that explicitly focus on AI safety.

  • The Transformative AI Safety Bibliographic Database seems to fit the description. You can search the database here. See the explanation post here.

  • There’s also the Alignment Newsletter Database. Everything summarized in the Alignment Newsletter goes there, so it fulfills “frequently updated”, but it is not a complete list of all AI Safety work, and it is not very searchable unless you count Ctrl+F.

Community and Support

Hanging out with AI Alignment researchers

A community of people to work with.

Cure for executive function issues

  • Shay helps those in the AI safety field to optimize their physical and mental health. You can book her here.

  • EA Coaching: productivity coaching for people who want to change the world

  • CFAR Coaching (Beta): “We find that two minds are often better than one, and we’d like to offer ours to help you.”

Feeling part of a community of like-minded people

  • This is important! Make sure to make friends along the way. We hope to be able to provide some community with our online events, but when the world opens up again, go to in-person events if you can.

  • See previous questions for ways to connect.

Other

A bit of GPU/TPU time

  • I’ve been told that TensorFlow Research Cloud gives substantial free TPU access to most, if not all, people who ask. The main issue is that TPUs are much harder to work with than GPUs. In terms of renting GPUs, vast.ai is very cheap.
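  • For a sense of what TPU code looks like, here is a minimal sketch of connecting to a Cloud TPU from TensorFlow. This is illustrative only: the TPU address is a hypothetical placeholder, and a real one would come from your own TensorFlow Research Cloud allocation.

```python
import tensorflow as tf

# Minimal sketch: connect to a Cloud TPU and build a model on it.
# The address below is a placeholder; use the one from your own
# TPU allocation (e.g. from TensorFlow Research Cloud).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu="grpc://10.0.0.2:8470"
)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Everything built inside this scope is replicated across the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="adam", loss="mse")
```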

Easier access to OpenAI’s technologies.

  • EleutherAI is creating an open-source GPT-3 replica, along with some other projects.
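  • If you just want to experiment with one of EleutherAI’s already-released models, they can be loaded through the Hugging Face transformers library. A minimal sketch (the model name and prompt are illustrative; assumes the transformers package is installed):

```python
from transformers import pipeline

# Load one of EleutherAI's publicly released models (GPT-Neo here;
# the exact model name is illustrative).
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Generate a short completion from an example prompt.
output = generator("AI safety research aims to", max_length=40)
print(output[0]["generated_text"])
```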

Good resources to explain AI safety to people with less knowledge about AI