To the extent that the program is meant to provide an introduction to “catastrophic and existential risk reduction in the context of AI/ML”, I think it should include some more readings on the alignment problem, existential risk from misaligned AI, transformative AI or superintelligence. I think the AI Safety Fundamentals AI Governance Program has some good readings for this.
Thanks for your comment. Question: do you think it's worth introducing x-risk (and related areas) in this context? I ask because we envision this reading group as a lead-in to an intro fellowship or other avenues of early-stage involvement. Given this, we want to balance the materials we introduce against limited time, while also making people curious about ideas discussed in the EA space.
My experience with EA at Georgia Tech is that a relatively small proportion of people who complete our intro program participate in follow-up programs, so I think it’s valuable to have content you think is important in your initial program instead of hoping that they’ll learn it in a later program. I think plenty of Georgetown students would be interested in signing up for an AI policy/governance program, even if it includes lots of x-risk content.