Thank you for the post; as a new uni group organizer, I’ll take this into account.
I think a major problem may lie in the intro-fellowship curriculum offered by CEA. It is billed as an “intro” fellowship, yet the program devotes a disproportionate three weeks to the longtermism/x-risk framework. For a person who is just encountering EA ideas, this could cause two problems:
First, as Dave mentioned, some people may want to do as much good as possible but don’t buy longtermism. We might lose these people, who could do amazing good.
Second, EA is weird and unintuitive. Even without the AI content, it is still weird because of ideas like impartial altruism, cause prioritization, and earning to give. If we add the “most important century” narrative on top of that weirdness, we might lose would-be EAs who could have stayed if they had encountered the ideas with time for digestion.
This was definitely the case for me. I had a vegan advocacy background when I enrolled in my first fellowship. It was only six weeks long, and only one week was given to longtermism. I now believe we are in the most important century, after a lot of time thinking and reading, but if I had been given this weird framework from the start, I might have been scared off and taken a step back from EA because of the overwhelming weirdness and cultish vibes.
Maybe we could slow down the creation of “AI safety people” by cutting the fellowship to six weeks and offering an additional two-week track for people who are interested in longtermism — or by simply giving them resources, having 1:1s, or directing them to in-depth programs.
I disagree-voted and briefly wanted to explain why.
“some people may want to do as much good as possible but don’t buy longtermism. We might lose these people, who could do amazing good.”
I agree that university groups should feel welcoming to those interested in non-longtermist causes, but it is perfectly possible to create this atmosphere without nixing key parts of the syllabus; I don’t think the syllabus is the main driver of that atmosphere. Rockwell and freedomandutility (and others) have made some great points on this, and I think the conversations you have (and how you have them) and the opportunities you share with your group do more to help folks stay cause-neutral.
One idea I liked was the “local expert” model where you have members deeply exploring various cause areas. When there is a new member interested in cause X, you can simply redirect them to the member who has studied it or done internships related to that cause. If you have different “experts” spanning different areas, this could help maintain a broad range of interests in the club and feel welcoming to a broader range of newcomers.
“If we add the ‘most important century’ narrative on top of that weirdness, we might lose would-be EAs who could have stayed if they had encountered the ideas with time for digestion.”
I think this assumes that people won’t be put off by the weirdness by, say, week 1 or week 3. I could see situations where people find caring about animals weirder than caring about future humans, or both of these weirder than pandemic prevention or global poverty reduction. I don’t know what the solution is, except reminding people to be open-minded yet critical as they go through the readings, and cultivating an environment where people understand that they don’t have to agree with everything to be part of the club.
A host of other reasons that I will quickly mention:
I don’t think those three weeks of the syllabus you mention disproportionately represent a single framework: one can care about x-risk without caring about longtermism, or vice versa, or care about both. There are other non-AI x-risks and longtermist causes that folks might be interested in, so I don’t think that content is there just to generate more interest in AI safety.
Internally, we (group organizers at my university) did feel the AI week was a bit much, so we made the career-related readings on AI optional. The logic was that people should learn about, for instance, why AI alignment could be hard with modern deep learning, but they don’t need to read the 80K career profile on safety if they don’t want to. We added readings on s-risks, and are considering adding pieces on AI welfare (undecided right now).
It is more honest to have those readings in the introductory syllabus: New members could be weirded out to see x-risk/longtermist/AI jobs on 80K or the EA Opportunity board and question why those topics weren’t introduced in the Introductory Program.
I was also primarily interested in animal advocacy prior to EA, and now I am interested in a broader range of issues while maintaining (and refining) my interest in animal advocacy. I have also lost interest in some causes I initially thought were just as important. I think an introductory syllabus with a broad range of ideas is important for this kind of cross-pollination and updating, and for a more robust career planning process down the line.
Anecdote: One of the comments that comes up in our group sometimes is that we focus too much on charities as a way of doing good (the first few weeks on cost-effectiveness, global health, donations, etc.). So, having a week on x-risk and sharing the message that “hey, you can also work for the government, help shape policy on bio-risks, and have a huge impact” is an important one not to leave out.