Quick takes:
The main types of value I was getting from the course were:
Accountability for doing the readings
The chance to use wrong terminology or say things that don't make sense (whether I was trying to explain something or asking a question) and then get corrected. This helped me develop a more coherent model of what's going on and catch unknown unknowns (at least by turning them into known unknowns).
Pointers to other resources: links to further readings and explanations
Corrections and clarifications during the sessions
Personal lessons for next time:
Set aside time to do the readings >24 hours in advance, and send in questions early
(Make an agreement with your facilitator to do this!)
Use Claude[1] as a personal tutor from the start
This was great. When I was reading something I had a hard time following, I'd give Claude some context, then say something like, "I'll now explain this in terms that I understand, or with an analogy or visualization that makes sense to me. Please correct me where I'm misusing terms or saying something wrong." Claude tended to be over-positive and would miss some things, but I'd often get a more technical restatement of what I was trying to say, which helped me a lot. This was relatively introductory material that wasn't specific to AI safety, so I think Claude was actually performing pretty well.
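(In case it's useful: here's a minimal sketch of this pattern using the Anthropic Python SDK, if you'd rather script it than use the chat interface. The model name, system prompt, and placeholder text are my illustrative assumptions, not exactly what I typed into Claude.)

```python
# Sketch of the "explain it back to a tutor" pattern via the Anthropic
# Python SDK (pip install anthropic; expects ANTHROPIC_API_KEY in the env).
# Model name and prompt wording below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

reading_excerpt = "..."  # paste the passage you're struggling with
my_explanation = "..."   # your own restatement, analogy, or visualization

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # any recent Claude model should work
    max_tokens=1024,
    # Ask for directness up front, since the default can be over-positive.
    system="You are a patient tutor. Be direct about mistakes; don't flatter.",
    messages=[
        {
            "role": "user",
            "content": (
                "Here is some context from a reading:\n\n"
                f"{reading_excerpt}\n\n"
                "I'll now explain this in terms that I understand. Please "
                "correct me where I'm misusing terms or saying something "
                "wrong, and restate my explanation more technically if "
                f"you can.\n\n{my_explanation}"
            ),
        }
    ],
)
print(response.content[0].text)
```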
Set up a notes and questions doc for myself from the beginning
I now have a messy doc with notes from the different weeks, further readings we were recommended, and short summaries. I find it quite useful, but the first few weeks are missing because I hadn't set it up yet and was just scribbling down questions as I read. Having the doc ready in advance would have been great and would have prompted me to use it.
Actually read the little intro notes for each week on the AGISF website
I think they would have been helpful context for the readings, but I often skipped them (especially at the beginning) because I didn't really see them as part of the reading.
I spent a fair amount of time on this and endorse doing that.
I think the more work I was putting in, the more value I was getting out of the course (there weren't diminishing returns). I was probably spending around 2-5 hours before the session each week (there might have been a week when I spent less than 2 hours, and the median was probably closer to 2-3). In higher-effort weeks, I would explore related writing or referenced texts, and try to get to the point where I could notice potential weaknesses in the assigned readings and pin down my confusions about them. I should flag that I wouldn't always do all the readings very carefully.
Having some amount of context before starting the course could be useful. If you're not familiar with the overall shape of the argument for existential risk from AI, I imagine the course structure might be a bit jarring; I moved cohorts early on, but I did feel like it began somewhat abruptly.
Minor points/asides:
It was just pretty fun.
I have notes on readings I found more and less helpful, and will try to pass those on at some point.
Facilitators probably matter a lot.
I enjoyed the readings for some weeks a lot more than for others.
In some cases, I was worried that the approaches covered seemed outdated.
I still have pretty broad confusions that I hope to resolve.
I'm quite glad I took the course!
[1] I was using Claude for dumb reasons — I don't have a strong sense for how it compares to GPT-4 on this.