It takes like 20 hours of focused reading to get basic context on AI risk and threat models. Once you have that, I feel like you can read everything important in x-risk-focused AI policy in 100 hours. Same for x-risk-focused AI corporate governance, AI forecasting, and macrostrategy.
[Edit: “read everything important” doesn’t mean you have nothing left to learn; it means something like: you have the context to appreciate ~all papers, you can follow ~all conversations in the field except between sub-specialists, and you have the generators of good overviews like “12 tentative ideas for US AI policy.”]
Am I wrong?
Actually, yes: I was imagining going back and speedrunning my own learning; if you’re not already an expert, you’re much worse at (1) figuring out what to prioritize reading and (2) skimming. But still: 300 hours each, or 200 with a good reading list, or 150 with a great one.
This is wild. Normal fields require more like 10,000 hours of engagement before you reach the frontier, and much more to read everything important. Right?
Why aren’t more people at the frontier in these four areas?
Normal fields have textbooks and syllabi and lit reviews. Those are awesome for learning quickly. We should have better reading lists. I should make reading lists.
My opportunity cost is high for several weeks; I’ll plan to try this in December. I should be able to make fine 100-hour reading lists on these four topics in 1 day each, or good ones in a week each.
I will be tempted to read too much stuff I haven’t already read. (But I should skim anything I haven’t read in e.g. https://course.aisafetyfundamentals.com/governance.) And I will have the curse of knowledge about prerequisites/context and what’s hard to understand. Shrug.
Maybe I can just get someone else to make great reading lists...
Why don’t there exist better reading lists / syllabi, especially beyond introductory stuff?
A reading list will be out of date in 6 months. Hmm. Maybe updating it wouldn’t actually be that hard?
I sometimes post (narrow) reading lists on the forum. Are those actually helpful to anyone? Would they be helpful if they got more attention? I almost never know who uses them. If I did know, talking to those people might be helpful.
If I actually try to make great/canonical AI governance reading lists, I should:
Check out all the existing reading lists: public ones + the private Airtable + student fellowships on governance (Harvard/MIT/Stanford) + reading lists given to new GovAI fellows or IAPS staff
Ask various people for advice + input + feedback: Mauricio, Michael, Matthijs, David, AISF folks, various Slacks; plus experts on particular topics like “takeoff speed”
Think about target audience. Actually talk to people in potential target audiences.
Maybe relevant: https://www.ai-alignment-flashcards.com/
I don’t know whether alignment is similar, but I suspect it lacks reading lists too.
The lack of lists of (research) project ideas (not to mention research agendas) in AI safety is even worse than the lack of reading lists. Can I fix that?
Talk to Michael and David
Super out of date, but see https://forum.effectivealtruism.org/posts/kvkv6779jk6edygug/some-ai-governance-research-ideas and what it links to
[Check out + talk to people who run] some of: ERA, CHERI, PIBBSS, AI safety student groups (Harvard/MIT/Stanford), AISF, SPAR, AI Safety Camp, Alignment Jam, AI Safety Hubs Labs, GovAI fellowship (see private docs “GovAI Fellowship—Research project ideas” and “GovAI Summer Fellowship Handbook”), MATS, Astra
Did AI Safety Ideas try and fail to solve this problem? Talk to Esben?
Look for other existing lists (public and private)
Ask various Slacks for (lists of) project ideas?
Ask authors of lists on https://forum.effectivealtruism.org/posts/MsNpJBzv5YhdfNHc9/a-central-directory-for-open-research-questions for updated lists.
Ask various relevant researchers & orgs for (lists of) project ideas?
For most AI governance researchers, I don’t know what they’re working on. That’s really costly and feels like it should be cheap to fix. I’m aware of one attempt to fix this; it failed and I don’t understand why.
Related: Research debt.
I disagree-voted because I feel like I’ve done much more than 100 hours of reading on AI policy (including finishing the AI Safety Fundamentals Governance course) and still have a strong sense that there’s a lot I don’t know, and I regularly come across new work that I find insightful. Very possibly I’m prioritising reading the wrong things (and would really value a reading list!), but I thought I’d share my experience as a data point.
Here are some of the curricula that HAIST uses:
The technical intro fellowship curriculum. It’s structured as a 7-week reading group with ~1 hour of reading per week. It’s based on BlueDot’s AISF, and the two curricula have co-evolved (we exchange ideas with BlueDot ~semesterly); a major difference is that the HAIST curriculum is significantly abridged.
The policy fellowship syllabus.
The HAIST website also has a resources tab with lists of technical and policy papers.
“I sometimes post (narrow) reading lists on the forum. Are those actually helpful to anyone?”

For what it’s worth, I found your “AI policy ideas: Reading list” and “Ideas for AI labs: Reading list” helpful,[1] and I’ve recommended the former to three or four people. My guess is that these reading lists have been very helpful to a few people rather than somewhat helpful to lots of people, but I’d also guess that’s the right thing to be aiming for given the overall landscape.
“Why don’t there exist better reading lists / syllabi, especially beyond introductory stuff?”

I expect there’s no good reason for this: it’s simply nobody’s job to make such reading lists (as far as I’m aware), and the few(?) people who could make good intermediate-to-advanced reading lists either haven’t thought to do so or are too busy doing object-level work?
[1] Helpful in the sense of: I read or skimmed the readings in those lists that I hadn’t already seen, which was maybe half of them, and I think this was probably a better use of my time than the counterfactual.
+1 to the interest in these reading lists.
Because my job is very time-consuming, I haven’t spent much time trying to understand the state of the art in AI risk. If there were a ready-made reading list I could devote 2-3 hours per week to, such that it’d take me a few months to learn the basic context of AI risk, that’d be great.