I plan on trying to get into AI Safety this week. Any recommendations as to where I start? What are the prerequisites?
Some info about me:
I’m fairly familiar with integrals and statistics.
I read Preventing an AI-related catastrophe − 80,000 Hours (80000hours.org)
I’m currently watching Robert Miles’s YouTube videos about AI safety—both the videos on his channel and the ones on Computerphile.
Update: I think maybe I’ll just look for bottlenecks—singular, clearly defined issues in the field of AI with a clear definition of what a solution would look like—then identify the prerequisites and learn them on a rolling basis. Are there any problems with this approach? If so, how do I fix them? And if not, what are some of these bottlenecks/problems?
The courses on https://aisafetyfundamentals.com/ are a common starting point for people!
I recommend AISafety.com and, if you are looking for introductions in video/audio form, I like this selection (in part because I contributed to it).
None of them are that technical though, and given that you mentioned your math knowledge, it seems that’s what you’re interested in.
In that case, the thing you have to know is that the field is said to be “pre-paradigmatic”: there’s no consensus about the best way to think about the problems, and therefore about what potential solutions would even look like or how they would come about. But there is work outside ML that is purely mathematical and related to agent foundations, and this introductory post that I’ve just found seems to explain all that better than I could—it would probably be more useful for you than the other links.
Thanks! That will come in handy; I’ll let you know how it goes.
Raemon, a moderator on LessWrong, recommends Scott Alexander’s Superintelligence FAQ.
I’ll look at it in a sec, thanks!