I’m wondering what people’s opinions are on how urgent alignment work is. I’m a former ML scientist who previously worked at Maluuba and Huawei Canada, but switched industries into game development, at least in part to avoid contributing to AI capabilities research. I interviewed earlier with FAR and Generally Intelligent, but didn’t get in. I’ve also done some cursory independent AI safety research on interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet.
My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it would free me up to do more AI safety research? I’m uncertain whether I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it leads to some meaningful difference.
It’s really hard to say without knowing how much a nanny costs, what your financial situation is, and how much you’d value being able to look after your child yourself.
If you’d be fine with a nanny looking after your child, then it is likely worth spending a significant amount of money to discover sooner whether you have a strong fit for alignment research.
I would also suggest that switching out of AI completely was likely a mistake. I’m not suggesting that you should have continued advancing fundamental AI capabilities, but the vast majority of jobs in AI involve building AI applications rather than advancing fundamental capabilities. Those jobs won’t have a significant effect on shortening timelines, but will allow you to further develop your skills in AI.
Another thing to consider: if at some point you decide that you are unlikely to break into technical AI safety research, it may be worthwhile to contribute in an auxiliary manner, e.g. through mentorship, teaching, or movement-building.