Thank you for taking the time to write this. In 2020, I had the opportunity to start a city group or a university group in Cyprus, given the resources and connections at my disposal. After thinking long and hard about the risk of the group homogenizing around a single cause area, I opted not to, and instead focused on facilitating the virtual program, where I believe I have more impact by introducing EA to newcomers from a more nuanced perspective.

Facilitators of the virtual program can maintain a balance between cause areas, and no single cause area can dominate, because there is no external pressure. I find it a much better use of my time and effort. With no social hierarchy or community to impress in the virtual program, it is easier for people to defend their epistemic beliefs and cause prioritization without social pressure. I also try my best to remind cohorts of the need for personal fit when choosing cause areas, rather than relying on groupthink or potential social currency. Perhaps this absence of external pressure is where university fellowships and the virtual program diverge. Cohorts are nudged to think deeply about the positions and ideas they hold without feeling pressured. Some end up working on AI safety; others end up working on areas such as climate change and biorisk; and some end up joining CE to develop new charity ideas. I find this highly fulfilling.

I don't necessarily think the CEA syllabus nudges people towards AI safety; it often serves as a good resource for cohorts who joined the program through animal welfare or global health and poverty to learn about other EA cause areas and to compare their epistemics and personal fit for tackling pressing issues. I do often get asked tough questions about why the community seems so focused on AI safety, especially from cohorts I have nudged to attend EAGx and EAG conferences. I point out the significant differences in funding across EA cause areas, because it is far too easy for some cohorts to come away with the idea that EA is one huge AI-based movement.
Given the different dynamics between the virtual program and university groups, I do sympathize with you.
Thank you for writing this. It puts some of the questions I get asked during the fellowship into better perspective, especially since it comes from a university organizer.