In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically—kudos to them. Maybe I should ask them why they haven’t expanded to AI strategy or if they have takes on that pipeline.
I know that around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance. I’m not sure if the idea is still alive, though.
Also, CBAI ran a pilot AI strategy research fellowship this past winter, which I participated in and found worthwhile. At the time they were, I think, planning on running a bigger version of the fellowship in the summer, though it appears that’s no longer happening.
no summer research programs this year from [...] SERI SRF
On the other hand, ERA (formerly known as CERI) and CHERI are running fellowships this summer, and I expect they’ll both have several AI governance fellows. (Though I do also expect, from what I know of these programs, that their AI governance focus will be more on applied governance than on strategy/theoretical governance. I don’t have a strong stance on whether this is overall positive or negative, but it does mean there’s less of an AI strategy pipeline.)
around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance
I’m on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning not MATS, as the two have largely split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but as far as I can tell it didn’t strongly die (i.e., I don’t think people decided against it and cancelled things; rather, it failed to happen due to other priorities).
I do think something like this would be good to make happen, though. If others want to help, let me know and I’ll loop you in with the people who were discussing it.
Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.
Correct that CBAI does not have plans to run a research fellowship this summer (though we might run one again in the winter). We are, however, tentatively planning a short workshop this summer that I think will at least slightly ease this bottleneck by connecting people worried about AI safety with the US AI risk policy community in DC. Stay tuned, and email me at trevor [at] cbai [dot] ai if you’d like to be notified when we open applications.
(And I heard MATS almost had a couple strategy/governance mentors. Will ask them.)
(Again, thanks for being constructive, and in the spirit of giving credit: yay to GovAI, ERA, and CHERI for their summer programs. [This is a yay for trying; I don’t know enough about the programs to say whether they’re good.])
(I now realize my above comments probably don’t show this, but I do agree with you that the AI strategy(+governance) pipeline is looking particularly weak at present, and that the situation is pretty undignified given that building this pipeline is perhaps one of the most important things we—the EA movement/community—could be doing.)