1-year update on impactRIO, the first AI Safety group in Brazil
This post reflects my own personal perspective as president and does not necessarily represent the other organisers' opinions. Special thanks to David Solar and Zoé Roy-Stang for their valuable feedback on the draft.
TL;DR
There are important universities in Rio de Janeiro that offer undergraduate, master's, and doctoral programs in AI and adjacent areas to some of the most talented students in the country.
We can list only about 5 Brazilian researchers who work on AI safety, and we're the first AI safety group in Brazil[1], so we face challenges like promoting AI safety and building engagement from scratch, and helping members network with more experienced people.
impactRIO has offered 3 fellowships (Governance, Alignment, and Advanced Alignment) based on AI Safety Fundamentals courses.
I estimate that the group has 8 highly engaged people among members and organisers.
Organisers and members account for at least 27 attendances at international EA and AI safety events.
We have observed and recorded impactful outcomes, including 7 warm impact stories and 9 people changing their career plans to mitigate AI risks.
I summarise some lessons learned regarding logistics, methodology, bureaucracy and engagement.
There are many uncertainties regarding the future of the group, but we foresee a huge opportunity for the AI safety community to grow and thrive in this city.
Overview
This post is aimed at those doing community building and those interested in having more context on the AI safety community in Rio de Janeiro, Brazil. I expect the section about lessons learned to be the most valuable for the former and the section about impactful outcomes to be the most valuable for the latter.
impactRIO is an AI safety group in Rio de Janeiro, Brazil. The group was founded in July 2023 with the support of UGAP, Condor Initiative, and EA Brazil, and during the last semester we had the support of OSP AIS (now named FSP). We remain an unofficial club at a university that has very ambitious plans to become an AI hub in Latin America. Last month, the university expressed disagreement with the existence of our group and stopped providing us any kind of support.
There are at least 2 programmes that fully fund students to live in Rio. These students usually come from many different parts of the country and have won medals at scientific olympiads. Rio therefore probably has the biggest talent pool in Brazil, and you can find very bright students around.
Given all this context, we decided to focus mainly on AI safety: we would likely get more students engaging with the group and could introduce AI safety to researchers and professors, giving us a larger expected impact.
26 people completed at least one of the fellowships we offered. Only around 7 of them are seniors or graduates; the vast majority are currently sophomores. We estimate that 8 people among organisers and members are taking significant actions motivated by EA principles. We had 2 guest speakers who work full time on safety-related research, and they enriched the content and inspired participants!
What we’ve been doing
We can list only about 5 Brazilian researchers who work full time on AI safety, and no other students at our university were aware of AI safety before we founded the group. So our group faces challenges like promoting AI safety and building engagement from scratch, and helping members network with more experienced people. Given this context, it was natural to offer introductory fellowships and meetups.
In the second semester of 2023, we started the AI Safety Fellowship by running an Alignment cohort based on BlueDot's Alignment curriculum, since we had a stronger technical background and little governance knowledge. Each of the 10 sessions was designed to last 1 hour, and most were grouped in pairs because of a tight schedule, so we usually had 2-hour sessions covering 2 different subjects. We agree this was bad, but it was unfortunately out of our control, since our spring term is shorter and has many holidays.
In the first semester of 2024, we continued the AI Safety Fellowship by running both an Advanced Alignment cohort, based on BlueDot's Alignment 201 curriculum, and a Governance cohort, based on BlueDot's Governance curriculum, since both the organising team and the number of interested people had grown. Each cohort ran for 10 sessions, each designed to last 1.5 hours. One strategy that worked really well was having a 30-minute break between the Governance and Advanced Alignment sessions: we only had to set up the snack table once, and all fellows had the chance to meet each other and chat casually, since only 4 people were attending both cohorts.
During both semesters, we also ran EA meetups in the city of Rio de Janeiro with the support of EA Brazil. We tried to run monthly lightning talks to help existing EAs in the city meet each other and to introduce new people to EA ideas and principles, as is done in São Paulo. Sometimes we had to cancel because of extreme weather or because we struggled to find people willing to attend or deliver a talk. Although our main focus is AI safety, we were in a very good position to engage the local EA community with minimal marginal effort. Unfortunately, we don't have enough feedback data to evaluate the impact of these meetups, but I estimate they created or fostered 78 connections.
Impactful outcomes
We asked for feedback after every fellowship session we ran so we could test strategies and keep improving. For each fellowship as a whole, members filled out an onboarding survey and a final survey. Some questions were exactly the same, and we got huge value from comparing the answers before and after. We changed the surveys between fellowships, so some data isn't available for all 3 of them.
Here is all the in-depth data on impact, which I’ll distil in the next section:
9 people changed their career plans to impactfully contribute to AI safety.
2 people had their AI safety career plans accelerated.
Impact stories:
“It was an intense journey of growth and self-discovery. From day one, I was challenged to expand my knowledge (before I had nearly 0 knowledge about AI) and to reflect deeply on the ethical impacts of artificial intelligence. Throughout the fellowship, I realised not only my own growth, but also the crucial importance of my role in building a safe and ethical future for AI. The experience was an inspiration for my personal and professional journey, reinforcing my commitment to the responsible development of technology.”
“For me, this fellowship was a unique experience that made me clearly understand what the world is really experiencing today in relation to AI. I’m learning about “building” an AI now, I’m just at the beginning of the journey, but thinking about the danger was of great value in learning how to do things in the best way. The fellowship made me want to study AI safety more, it made me discover opportunities abroad and realise that the world really needs us to think, study and take care of things that many don’t even know exactly how they work and have no idea of the risks.”
“The Fellowship was a great opportunity for me to learn more about a subject I’m interested in in a guided way (which makes everything faster and more interesting). My interest in the subject grew even more. Having discovered opportunities (through notices in groups/during the Fellowship) in the area was also very good! I feel that now I am, at least a little, more capable to look for more opportunities and start to actually take action and help with the Safety field.”
“Surprising. I expected it to be something more passive, in the sense of having more readings and lectures, but I was very positively surprised by the quantity and quality of the dynamics and forms of engagement. Discussions were always open to all opinions so that everyone could collaborate and learn together. This made a total difference, as I believe it is the quickest and most effective way to understand and really think about the topics covered.”
“It was very special. I loved the valuable connections I made, the learning throughout the process, the opportunity to discover and study something new and outside of my routine, and to have a new career perspective. I felt it was extremely valuable for my personal and professional growth. It made me rethink a lot of things about the world and about myself, and about what I want for the future. Also, it was so much fun! I felt very welcomed and comfortable among such dear people. I’ll be there in the next ones!”
“It was an interesting experience to do it in the 2nd semester of my Applied Mathematics undergraduate course, as it presented me with a broader perspective of what Artificial Intelligence is and the most current problems this area faces. Having organisers engaged with the topic also facilitated the translation of the content and certain technical terms into a language that was more accessible to everyone, in general. I feel that having people from other courses participating also enriched the conversations at the same time as it made me see the recent events of the AI boom in a different way—much more realistic and also urgent.”
“It restored in me the desire I had to have an impactful career! I had left that a little aside with the college routine, but the fellowship made me remember and rethink my future plans.”
How members evaluated the use of their time:
Governance:
4 reported >10x the counterfactual
2 reported 3x–10x the counterfactual
3 reported 1x–3x the counterfactual
1 reported 50%–100% of the counterfactual
Advanced Alignment:
4 reported >10x the counterfactual
3 reported 3x–10x the counterfactual
1 reported 1x–3x the counterfactual
1 reported the same as the counterfactual
Average overall satisfaction:
Governance: 9.7/10
Advanced Alignment: 9.3/10
Average rating of how they felt socially:
Governance and/or Advanced Alignment: 9.6/10
Average rating of overall productivity:
Alignment: 9.5/10
Governance and/or Advanced Alignment: 9.3/10
Average rating of 1-on-1s:
Alignment: 9.8/10
What members reported as most valuable:
Learning to hear and consider divergent opinions.
Training (fast) presentation skills.
A good overview of governance, alignment, and possible approaches.
Thoughtfully interacting with people from other backgrounds.
Discussions and presentations that pulled them out of their comfort zone.
An environment for discussions with like-minded people.
Organisers who were very dedicated to keeping quality and engagement high.
Content that helped them with research on AI ethics.
Insights into how AI development can affect socio-economic dynamics.
A real sense of being connected and part of a community.
Understanding how to find and navigate AI safety resources.
Deep reflections on the possible capabilities of future AIs.
Discovering concrete cutting-edge research projects and directions.
Motivation to pursue a career in AI safety.
What members reported as least valuable:
The language barrier.
Low engagement from a few participants.
The fast pace.
Too little time for individual thinking.
Pre-readings that were too numerous and too long.
Lacking the technical AI background for governance discussions.
Not enough content exposure/explanation before group work.
Publicly sharing essays written in very little time.
Opportunities organisers and members applied to or attended[2]:
1 attended CaMLAB.
1 attended the Athena Program.
1 attended UGOR.
11 applied for Condor Camp, and 5 were successful.
Through broader outreach, 6 other students were also accepted.
1 was invited to be a mentor, and 1 was invited to be a facilitator.
5 applied for Global Challenges Project Workshops, and all were successful.
10 attended EA meetups[3].
2 completed the Career Planning Program.
9 applied for EA Global, and 3 were successful.
10 applied for EA Brazil Summit, and all were successful.
1 was invited to deliver a workshop on university community building.
5 applied for AI safety research fellowships.
8 applied for ML4GOOD, and 3 were successful.
Through broader outreach, 2 other students were also accepted.
1 was invited to be a TA.
5 completed the EA Intro course.
If you would like to know more about our impact measurements and get access to anonymized survey/feedback data, please send me an email at jlduim@gmail.com.
Lessons learned
Bureaucracy needs very careful attention. We did our best to make sure everything was right on our end, but we erred in assuming the same was true on the university's end. Universities are very complex, and they may not internally forward information to important stakeholders. From now on, for whatever kind of authorisation we need, we'll always make sure all relevant staff are looped in.
There is a noticeable lack of engagement with assigned readings. To mitigate this, increasing reminders and providing summaries could encourage more participants to prepare adequately. We also noticed that accessing resources can be confusing when multiple platforms are used and mentioned, so we decided to streamline these into a few reliable channels, like a centralised drive and Notion pages, to improve accessibility.
Facilitating technical study sessions presents unique difficulties: our training predominantly focused on moderating discussions rather than on technical content. Additionally, some topics need experts where current facilitators lack depth, suggesting that finding more guest speakers would be very beneficial. Given this, we decided to test diverse methodologies during sessions, such as in-session tasks and varied dynamics, which can enhance engagement and retention.
Group dynamics and interaction strategies also require refinement. While splitting into groups for activities is seen as time-consuming and occasionally confusing, the interaction within small groups is highly valued. Participants appreciate sharing their insights across groups, although this can be intimidating for some; better structure and guidance for group activities could alleviate this. Furthermore, incorporating more career-oriented discussions and philosophical questions could significantly deepen participants' engagement and personal reflection on the topics discussed.
Social events are crucial for building community and networking among participants. However, the choice of venue and the format of the event greatly influence their success. Restaurants, which restrict movement and interaction, are less effective than more dynamic settings where people can easily mingle. Introducing structured activities during social events could also help introverted participants feel more comfortable and engaged.
A session duration of 1.5 hours has proven optimal, balancing depth with breadth. Also, making sure all attendees understand the foundational principles of the content is crucial: feedback indicates that a few participants are sometimes unclear about core concepts, which hurts group discussions.
Future plans
First of all, to ensure the group continues to exist, we need either authorisation from our university or to spin off as an independent group. I intend to update this post when we have a decision! Despite this situation, we've already discussed many promising future activities given our context, which might help other AI safety groups plan their own.
I believe we’re losing much impact potential by not running research and more hands-on activities. We can list dozens of talented, altruistic, truth-seeking individuals who are already doing notable upskilling and research on AI modelling and capabilities, but it’s harder to convince these people to pivot to safety if we’re not showing them a clear pipeline, and they won’t find an advisor at the university who’s doing safety research. Therefore, engaging an AI safety research group will likely produce huge value, and we’ll prioritise this approach.
Last year, our university hosted an EA Brazil event on impactful careers. We had the honour of having our university's Internship and Career Development Centre team attend. They were blown away by some of 80,000 Hours' insights and were willing to cooperate and incorporate impact into their framework. We truly believe running career workshops based on EA principles would be extremely impactful and engaging.
Many students at our university are actively looking for hackathon opportunities, and we should have run one already. Given the talent pool in the city, an Alignment and Governance hackathon could be wildly successful, helping identify more talented people willing to enter the field. Also, if it goes really well, it could catch the attention of senior researchers who might be willing to pivot to safety and advise younger people! I think the main reason we haven't run a hackathon yet is that we have high expectations and want to make sure we have enough time to organise it well and not mess it up.
We’re also hoping to catch the attention of professors and researchers in universities and think tanks for AI safety discussions and projects, and our goal is to include related subjects in at least 1 undergraduate curriculum and have AI safety projects being officially developed in the city.
We surveyed members on how we could best support them in the following semesters; here are their ideas:
Run more one-off talks about varied AI safety and EA topics.
Produce content regularly, like LinkedIn posts and a newsletter.
Offer more networking opportunities with AI safety professionals.
Offer mentorship for highly engaged members.
Systematically communicate all opportunities with strategic reminders.
Keep on offering fellowships.
Run coworking sessions to apply for opportunities.
Help members stay better informed about the latest AI safety news.
We intended to offer some of these, like one-off talks, this semester, but couldn't. Others, like an AI safety newsletter, are already covered by other initiatives, so focusing on them may not be the best strategy for us.
I’m very excited to see how this group will help the AI safety community grow and thrive in Rio de Janeiro and inspire other groups in Brazil!
[1] ALAI USP was founded at the same time, but it seems to be currently inactive.
[2] These numbers are actually a lower bound, because people sometimes forget to mention some opportunities.
[3] This number only includes EA meetup attendees who completed at least 1 fellowship.