I occasionally get asked how to find jobs in “epistemics” or “epistemics+AI”.
I think my current take is that most people are much better off chasing jobs in AI Safety. There’s just a dramatically larger ecosystem there—both of funding and mentorship.
I suspect that “AI Safety” will eventually encompass a lot of “AI+epistemics”. There’s already work on truthfulness and sycophancy, for example, and there are a lot of research directions I expect to be fruitful.
I’d naively expect that a lot of the ultimate advancements in the next 5 years around this topic will come from AI labs. They’re the main ones with the money and resources.
Other groups can still do valuable work. I’m still working under an independent nonprofit, for instance. But I expect a lot of the value of my work will come from ideating and experimenting with directions that would later get scaled up by larger labs.