I’m not sure what all of the participants’ motivations were for joining (I should have gathered that info). As background, we mostly publicized the intensive to members of MIT EA interested in AI safety and to members of Harvard EA. Here are, I think, the main motivations I noticed:
Considering pursuing AI safety technical research as a career, and thus wanting to develop a foundation/overview (~2 participants);
Wanting to learn about an important EA cause area to get a more well-rounded view of EA, or to help with work in an adjacent cause area like AI governance (~2 participants);
Shoring up knowledge or filling in gaps about AI safety, while already planning to work in the field (~2 participants).