Thank you, seems like an awesome 3 days. Would you be able to share a little more on the participants’ motivation, either their own or how you motivated them? I am trying to encourage a local group to try similar short but somewhat intense sprints as a low stakes attempt to increase familiarity and confidence with different topics, or at least encourage interest and breadth of knowledge. What worked well for you?
I’m not sure what all of the participants’ motivation was for joining (I should’ve gathered that info). As background, we mostly publicized the intensive to members of MIT EA interested in AI safety and to members of Harvard EA. Here are, I think, the main motivations I noticed:
Considering pursuing AI safety technical research as a career, and thus wanting to develop a foundation/overview (~2 participants);
Wanting to learn about an important EA cause area to get a more well-rounded view of EA, or to help with work in an adjacent cause area like AI governance (~2 participants);
Shoring up/filling in gaps in knowledge about AI safety, already planning to work in AI safety (~2 participants).
That’s really helpful, thank you!