I really enjoyed this post, thank you for writing it. I’m commenting from an AI law and policy perspective, so this comment is mainly aimed at that angle.
I agree with much of your post, but I want to highlight that there is a need for social scientists in some areas of AI Safety research. I have worked on a few projects for the UK government around AI Safety, helping to build legal, regulatory, and mitigation strategies in the field, usually as part of an interdisciplinary team. A few of the team are typically sociologists, which, coming from a mixed CompSci and Law background, was initially a big change for me. They were massively useful. I think the importance of understanding human society and how it functions is often woefully underestimated in the AI Safety field. It may or may not have a place in purely hard-line technical AI Safety work (I’d be the wrong person to ask), but in governance and policy, specialisms such as sociology and economics are very important. If anything, there’s a bit of a lack of people with that expertise who also have adequate knowledge of AI. So if there is someone who is, for example, a sociology PhD with a big interest in AI, there are definitely opportunities available.
The hard part is finding them. One of the weird niggles of AI Policy/Governance is that it’s heavily network-based: you have to build and maintain relationships as a core resource. This means someone starting out without anyone to guide or help them can face a real challenge. Another downside is that sometimes (quite rarely) the work/research/projects are secret or under NDA, so people don’t always get to talk about the work they did in as much detail when applying for fellowships, jobs, etc.
This is why I think orgs which run fellowships in this area are important—they’re a jumpstart on the network element and can help better guide people to new specialisms.