Thank you for the post; as a new uni group organizer, I'll take this into account.
I think a major problem may lie in the intro-fellowship curriculum offered by CEA. It says it is an "intro" fellowship, but the program spends a disproportionate 3 weeks on the longtermism/x-risk framework. For a person encountering EA ideas for the first time, this could create two problems:
First, as Dave mentioned, some people may want to do as much good as possible but don't buy longtermism. We might lose these people, who could do amazing good.
Second, EA is weird and unintuitive. Even without the AI content, it is still weird because of ideas like impartial altruism, prioritization, and earning to give. If we present all of this weirdness plus the "most important century" narrative to would-be EAs, we might lose people who could have become EAs if they had encountered the ideas with more time for digestion.
This was definitely the case for me. I had a vegan advocacy background when I enrolled in my first fellowship. It was only 6 weeks long, and only one week was given to longtermism. I now do believe we are in the most important century, after a lot of time thinking and reading, but if I had been given this weird framework from the start, I might have been scared off and taken a step back from EA because of the overwhelming weirdness and cultish vibes.
Maybe we could slow down the creation of "AI safety people" by cutting the fellowship to 6 weeks and offering an additional 2-week track for people who are interested in longtermism, or by just giving them resources, having 1:1s, or pointing them to in-depth programs.
I disagree-voted and briefly wanted to explain why.
"some people may want to do as much good as possible but don't buy longtermism. We might lose these people, who could do amazing good."
I agree that university groups should feel welcoming to those interested in non-longtermist causes, but it is perfectly possible to create this atmosphere without nixing key parts of the syllabus. I don't think the syllabus has much to do with creating this atmosphere. Rockwell and freedomandutility (and others) have listed some great points on this, and I think the conversations you have (and how you have them) and the opportunities you share with your group could help folks be more cause-neutral.
One idea I liked was the "local expert" model, where you have members deeply exploring various cause areas. When there is a new member interested in cause X, you can simply redirect them to the member who has studied it or done internships related to that cause. If you have different "experts" spanning different areas, this could help maintain a broad range of interests in the club and feel welcoming to a broader range of newcomers.
"If we present all of this weirdness plus the 'most important century' narrative to would-be EAs, we might lose people who could have become EAs if they had encountered the ideas with more time for digestion."
I think this assumes that people won't be put off by the weirdness by, let's say, week 1 or week 3. I could see situations where people find caring about animals weirder than caring about future humans, or find both of these weirder than pandemic prevention or global poverty reduction. I don't know what the solution is, except reminding people to be open-minded and critical as they go through the readings, and cultivating an environment where people understand that they don't have to agree with everything to be part of the club.
A host of other reasons that I will quickly mention:
I don't think those three weeks of the syllabus you mention disproportionately represent a single framework: one can care about x-risk without caring about longtermism, or vice versa, or care about both. There are other non-AI x-risks and longtermist causes that folks might be interested in, so I don't think that content is there just to generate more interest in AI safety.
Internally, we (group organizers at my university) did feel the AI week was a bit much, so we made the career-related readings on AI optional. The logic was that people should learn about, for instance, why AI alignment could be hard with modern deep learning, but they don't need to read the 80K career profile on AI safety if they don't want to. We added readings on s-risks and are considering adding pieces on AI welfare (undecided right now).
It is more honest to have those readings in the introductory syllabus: new members could be weirded out to see x-risk/longtermist/AI jobs on 80K or the EA Opportunity board and question why those topics weren't introduced in the Introductory Program.
I was also primarily interested in animal advocacy prior to EA, and now I am interested in a broader range of issues while maintaining (and refining) my interest in animal advocacy. I have also lost interest in some causes I initially thought were just as important. I think having an introductory syllabus with a broad range of ideas is important for that kind of cross-pollination/updating and for a more robust career planning process down the line.
Anecdote: one comment that comes up in our group sometimes is that we focus too much on charities as a way of doing good (the first few weeks cover cost-effectiveness, global health, donations, etc.). So having a week on x-risk and sharing the message that "hey, you can also work for the government, help shape policy on bio-risks, and have a huge impact" is an important message not to leave out.