I recently helped start an AI Safety group (together with @Eitan) at the University of Buenos Aires, so this is incredibly useful—thank you!
A few observations from our experience:
Mailing lists have been great for outreach. We were lucky to have a faculty member who was excited about our initiative. They sent emails on our behalf, vouching for us, and have attended most of our sessions.
We also presented an AI Safety paper at an AI paper club and handed out flyers with a QR code to join our Telegram group at AI-related classes. This approach had moderate success.
In hindsight, we should have created a well-formatted Notion page with all the details and an FAQ earlier.
We’ve noticed that most participants read the material before class, but almost no one does the exercises. To address this, we now split the group at the start of each session: one group for those who completed the material and another for those who didn’t. The first group dives into open questions, while the second gets a brief summary.
Initially, we didn’t put much emphasis on catastrophic risks. We framed the group as an exploration of the risks and transformative impacts of AI on society. The course materials introduced the heavier topics naturally, so it was more of a gradual realization for participants. Interestingly, one of our regular attendees, a physics professor, came to learn about reducing bias in her research, only to discover that AI Safety is a much broader field than she expected.