I’m really glad you chose to make this post and I’m grateful for your presence and insights during our NYC Community Builders gatherings over the past ~half year. I worry about organizers with criticisms leaving the community and the perpetuation of an echo chamber, so I’m happy you not only shared your takes but also are open to resuming involvement after taking the time to learn, reflect, and reprioritize.
Adding to the solutions outlined above, some ideas I have:
• Normalize asking people, “What is the strongest counterargument to the claim you just made?” I think this is particularly important in a university setting, but also helpful in EA and the world at large. A uni professor recently told me that one of the biggest shifts among their undergrad students has been a fear of steelmanning, lest others incorrectly believe it’s the position they actually hold. That seems really bad. And it seems like establishing this as a new norm could have helped in many of the situations described in the post, e.g. “What are some reasons someone who knows everything you do might not choose to prioritize AI?”
• Greater support for uni students trialing projects through their club, including projects spanning cause areas. You can build skills that cross cause areas while testing your fit and achieving meaningful outcomes in the short term. Campaign for institutional meat reduction in your school cafeteria and you’ll develop valuable skills for AI governance work as a professional.
• Mentorship programs that match uni students with professionals. There are many mentorship programs to model this on and most have managed to avoid any nefariousness or cult vibes.
• Restructuring fellowships such that they maintain the copy-paste element that has allowed them to spread while focusing more on tools that can be implemented across domains. I like the suggestion of a writing fellowship. I’m personally hoping to create a fellowship focused on social movement theory and advocacy (hit me up if interested in helping!).
I remember speaking with a few people who were employed doing AI-type EA work (people who appear to have fully devoted their careers to the mainstream narrative of EA-style longtermism). I was a bit surprised that when I asked them, “What are the strongest arguments against longtermism?” none were able to provide much of an answer. I was perplexed that people who had decided to devote their careers (and lives?) to this particular cause area weren’t able to clearly articulate its main weaknesses/problems.
Part of me interpreted this as “Yeah, that makes sense. I wouldn’t be able to speak about strong arguments against gravity or evolution either, because it seems so clear that this particular framework is correct.” But I also feel some concern if the strongest counterargument is something fairly weak, such as “too many white men” or “what if we should discount future people.”