EA Claremont Winter 21/22 Intro Fellowship Retrospective
Summary/Takeaways
EA Claremont ran an Intro Fellowship/Seminar over winter break, condensed into 3.5 weeks. It was successful, and we recommend other university groups try this too if they have organizer/facilitator capacity.
We tried a scheduling system where participants could choose which meeting they attended each week. It went fairly well, though it might not work as well when classes are in session.
For the Meeting 2 exercises, participants briefly read about a key concept and wrote a two-sentence explanation, which they shared with the group. This was a good way to make the program more active and gave participants an opportunity to teach each other.
A “read 1 of these 3 articles” format worked well in Meetings 4 and 6, and I recommend incorporating it into more curricula.
My questions: What are your favorite 10-20 min readings about longtermism that could supplement the current offerings? Is there a better 20-30 min AI safety reading to include in these programs? Is anybody working on organizing/analyzing all the “experiments” different EA groups run? If not, CEA should hire somebody to do this ASAP.
Logistics
I (on behalf of EA Claremont) organized and facilitated (with some substitutes) an Intro EA Fellowship over our schools’ Winter Break. We condensed the 8-week EAVP program into 7 meetings over 3.5 weeks, which meant combining Existential Risk and Emerging Technology into one meeting. Our full syllabus is available here. The program met over Zoom, and there were no fixed cohorts.
Instead of fixed cohorts, I set up about 4 Meeting A times and 4 Meeting B times per week; participants came to one of each and could switch which session they attended based on their availability that week. This worked surprisingly well.
I think this would not work as well when school is in session because folks are busier, but it was a very good system for meeting during break, and it is worth exploring more.
I sorta forgot about Christmas and New Year’s, lol. Attendance at meetings on Christmas Eve/Day and New Year’s Eve/Day was low; if I were to do this again I would not schedule meetings on those days.
I facilitated all meetings except when I had a conflict. This is a big time commitment for a single facilitator (~8-10 hours per week of facilitating), but I did not find it too burdensome. I think it would work well to have 2 facilitators who each run about 2-3 of each meeting type per week (4-6 sessions per week total) and can substitute for each other.
I chose this Meeting A / Meeting B system partly because coordinating schedules is really annoying, especially for meeting twice a week with the same group. For other groups considering twice-weekly meetings, I think this format beats trying to coordinate everybody’s schedules. You could also assign people to one Meeting A and one Meeting B so they are sorta part of 2 cohorts; we think somebody should try this. The model I used was helpful because it gave folks flexible schedules: they could easily make up a meeting by just coming a few hours later.
Originally I offered 5 Meeting As and 5 Meeting Bs for the first week, then canceled the meetings with the lowest attendance. This was a good idea, but my communication about it was bad: some participants were confused about whether a meeting was canceled just for that week or for the rest of the program. This schedule format is a bit confusing, so clear communication is a must. Using a shared GCal would probably have helped, and I should have done this.
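For concreteness, here is a minimal sketch of the pruning step, with made-up slot names and first-week attendance counts (the real slots and numbers were different):

```python
# A minimal sketch of the pruning step, assuming hypothetical slot names and
# first-week attendance counts (the real slots and numbers were different).
week1_attendance = {
    "A-Mon-10am": 5, "A-Tue-2pm": 1, "A-Wed-7pm": 4, "A-Thu-11am": 3, "A-Fri-4pm": 0,
    "B-Mon-7pm": 4, "B-Wed-10am": 2, "B-Thu-3pm": 5, "B-Fri-1pm": 1, "B-Sat-11am": 0,
}

def keep_top_slots(attendance, prefix, n=4):
    """Return the n best-attended slots of a given meeting type ('A' or 'B')."""
    slots = [s for s in attendance if s.startswith(prefix)]
    return sorted(slots, key=lambda s: attendance[s], reverse=True)[:n]

keep = keep_top_slots(week1_attendance, "A") + keep_top_slots(week1_attendance, "B")
cancel = sorted(set(week1_attendance) - set(keep))
# Crucially, announce that these are canceled for ALL remaining weeks,
# not just the current one -- this is where my communication fell down.
print("Keeping:", keep)
print("Canceling permanently:", cancel)
```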
A downside of this system is that participants do not get familiar with a single cohort the way they might otherwise. I don’t put too much weight on this, because I think the current goal of Intro Fellowships (including this one) is mainly learning content and getting introduced to the community, rather than forming close connections. I could easily be downplaying the importance of these social connections, and I would love to hear counterarguments/data about this – we have to try different things to figure out the best ways to run Intro Fellowships/Seminars.
I was the main facilitator for all of the participants. In retrospect, I think it would have helped to have another facilitator who ran 1 of each meeting per week and was available to substitute. I had substitution help from other EA Claremont organizers, but it would probably be best to have 2 “dedicated” facilitators. This would also let participants get more facilitator perspectives, or meet with somebody else if they didn’t like me (though I don’t think this was a problem).
I converted all the required readings for each meeting into PDFs and put them into per-meeting folders, so participants could download a single zip file with all of the required content. I think this went very well; however, I should also have included in each folder the section of the syllabus corresponding to that meeting, because a couple of participants ended up just doing the readings in the folder without ever looking at the syllabus (which had exercises, organization highlights, etc.).
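If it helps, here is roughly how that packaging step could be scripted; the folder layout and file names are assumptions for illustration, not my actual setup:

```python
# A sketch of the packaging step, assuming a hypothetical folder layout
# (readings/meeting_1/, meeting_2/, ...); this was not my actual setup.
import shutil
from pathlib import Path

readings = Path("readings")            # one folder of PDFs per meeting
syllabus = Path("syllabus_sections")   # e.g., syllabus_sections/meeting_1.pdf

for meeting_dir in sorted(p for p in readings.iterdir() if p.is_dir()):
    # The fix suggested above: copy the matching syllabus section into each
    # folder so participants see the exercises even without opening the syllabus.
    section = syllabus / f"{meeting_dir.name}.pdf"
    if section.exists():
        shutil.copy(section, meeting_dir)

# Bundle everything into the single zip participants download.
shutil.make_archive("intro_fellowship_readings", "zip", root_dir=str(readings))
```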
Rationale
We decided to run programming over winter break because many students had told us they were interested in learning more about EA but simply didn’t have time during the semester. Students generally have more free time during breaks, so this solved that problem.
I also thought this would be especially good for younger students; when I was beginning university, I found breaks to be sad, unproductive, and antisocial. It was cold, none of my friends were around, and I would usually waste my time playing video games or watching TV, which I didn’t find very valuable. I wanted to run this program so other students could retain social connection and keep their minds engaged and learning.
Syllabus Changes
These are generally changes relative to the syllabus used by EAVP.
For each meeting, I included key quotes and key terms for that meeting topic. I also included a link to the exercises in the Required Readings section; I think this increased the number of participants who did the exercises.
I made GiveWell the spotlighted organization for Meeting 1 and Open Phil the spotlight for Meeting 2. I’m not sure participants read these, but it seems useful to include Open Phil given how important it is in the EA space. For Meeting 7 (the final one) I made 80k the org spotlight, even though participants were already reading the 80k piece.
Meeting 1: I included the “What the program involves” and “How we hope you’ll approach the program” sections in the required reading, which seems like an obvious move if we want people to actually read them.
Meeting 2: I included a 2 min read about the ITN framework, which I think was a good choice. Dedicating a whole meeting to cause prioritization isn’t a bad idea, but even a short reading on the basics was valuable.
In discussions, I spent 5-15 mins on the ITN framework, which is a decent amount of time, and participants remembered it later in the program, which suggests the learning stuck.
Also Meeting 2: For the exercises, I had participants briefly estimate their future income, look at their possible impact from donations via GiveWell top charities, and then I tried something new for the third exercise!
I asked participants to spend 5 minutes reading about, and writing a ~2 sentence summary of, one of the following concepts: scope neglect, expected value, counterfactual thinking, diminishing returns, thinking at the margin, and earning to give.
I let participants choose which one to look into, and then had everybody share their short summary with the group during discussion. This distributed the teaching of content and was a low-effort way to give participants more responsibility for contributing to the group. I added to people’s explanations where needed and explained the concepts nobody covered.
Somewhat to my surprise, almost everybody actually did this, and it went great! I would recommend it, if only tentatively, as a model for making Fellowships more active.
Assigning each participant a concept might have been better, but given the inconsistent groups it wouldn’t have worked perfectly.
Between this exercise and the ITN framework, the Meeting 2 discussion was trying to do a lot. This is not ideal, and the material would be better split across a couple of meetings.
Meeting 3: I changed the exercises a bit. Instead of a letter to a past self (which anecdotally hasn’t worked well for my past cohorts), I gave a few questions prompting participants to reflect on the reading and on how the arguments sit with them, plus questions about moral choices (deworming vs. animals). I don’t have a sense of whether this was better or worse than the original exercises, but personally I prefer it.
Meeting 4: In addition to The Precipice Intro/Ch. 1 and What We Owe the Future, I asked participants to read one of the following: Against Neutrality About Creating Happy Lives; Why Our Impact in Thousands of Years Could Be What Most Matters; All Possible Views About Humanity’s Future Are Wild; and Why I Find Longtermism Hard, and What Keeps Me Motivated. Almost everybody read the final piece, by Michelle Hutchinson, and they liked it.
Apparently, one reason this piece was read so much is that my one-sentence description was enticing: “If longtermism seems really important, but so does the here and now…”
I think the model of having participants choose one of a few articles is promising, though it was odd here that everybody read the same one. I doubt these 4 are the best ones to offer, but I think it’s a good idea to experiment with this, and I would love to hear other suggestions for 10-20 min reads relevant to longtermism.
I also slightly modified the exercises for this meeting, which I think was good.
Meeting 5: This meeting combined Existential Risk and Emerging Technologies, so I had to cut some of the required reading. Participants read The Precipice Ch. 2 and the pandemics part of Ch. 5, Kelsey Piper’s Vox piece on AI, and the 80k piece on policy ideas for existential risk.
Looking back, it seems like I cut out material effectively and preserved the main ideas.
In discussion, I asked “Why are some people worried about Artificial General Intelligence, or AGI, as an existential risk?” Based on the responses, I do not think the Vox piece on AI should be the intro material we use, because many participants were confused. I often hear that this piece is a very good intro; in my opinion it has always been fine, but nothing fantastic. I would really love for our community to iterate on this and create/spread better intro materials specifically on existential risk from AI. I may work on this myself, but others are better positioned than I am to write informed, convincing, and informative pieces about it. AI risk is such a huge problem that we really need great intro materials for it, IMO.
Meeting 6: I replaced Pascal’s Mugging with my rewritten version of the story. Anecdotally, participants found the new piece easier to understand (including a couple participants who read both), but it still needs work. I also tried something else that worked super well!
I had participants read one of the following short posts (as in Meetings 2 and 4): How not to be a “white in shining armor”; Disagreeing about what’s effective isn’t disagreeing with effective altruism; and The lack of controversy over well-targeted aid. This went well: participants read them and, in sessions where we had enough time, summarized their piece for the group.
These pieces lend themselves to summarizing less well than the key concepts from Meeting 2 did, as they are already very short.
Because each of these posts is quite short, I am inclined to ask participants to read 2 of them each in the future. Participants really liked the second one (Wiblin), and it’s important, so maybe everybody should read that one plus one of the other two.
Meeting 7: Given that many participants in uni group programming are undergraduates, I added How to choose a research topic from Effective Theses as a required reading. I don’t have a sense of whether this was helpful, but introducing people to Effective Theses seems like a decent idea.
In the exercises for this meeting, I had participants reflect on what they want to come next in their relationship with EA. I think this was slightly useful, especially since it is something I like to ask about in the final meeting anyway. I also included some suggestions in the syllabus.
Some anecdotal feedback from the last meeting: the 80k reading is slightly repetitive at times, it hammers home the ITN framework, and many participants have not changed their career plans as a result of the program.
I had participants fill out both an anonymous and a non-anonymous survey at the final meeting to get feedback on the program; see below.
Post-program (anonymous) data/feedback
26 people applied and all were accepted. 18 people came to the first meeting (or a makeup session I offered), and 12 completed the program – roughly a 69% show-up rate among applicants and a 67% completion rate among those who started.
Because of small sample sizes, actual analysis here is fairly useless, but it might give us some sense of what is going on.
Almost everybody who finished the program filled out the anonymous survey at the end, and one person who did not complete the program filled it out.
That person cited personal reasons and confusion about the schedule as their reasons for not completing the program – again, I did a poor job communicating scheduling info.
The data below only includes people who finished the program (n = 11).
6/11 respondents indicated being unsure how they want to engage with EA further, which seems like a big deal. Including suggestions for this in the syllabus is probably good, but clearly I could have done a better job making these avenues clear to participants. Notably, participants filled out the survey at the beginning of the final meeting, and we then spent much of that meeting brainstorming and discussing ways to engage more – so the content of the final meeting might have changed some folks’ plans.
Mean of how positive you feel toward EA (1-10): 8.1
Mean of how much you want to engage more with EA (1-10): 7.2
Mean of how much you understand the basics of EA (1-10): 7.9
Mean of how likely to recommend the program to a friend (1-10): 9.1
Mean of how much you feel like you belong in the EA community (1-10): 7.5
Mean of how you rate the facilitator (1-10): 9.5. However, there was one low outlier whose written description of the facilitator was very positive, so I think they may have made an error in their numerical rating. With that response omitted, the mean is 9.7 (a quick sketch of this calculation is below).
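The raw responses aren’t shareable, so the scores in this sketch are made-up values chosen to reproduce the same two means:

```python
# Sketch of the outlier check; the scores below are hypothetical values
# constructed to match the reported means, not the real survey responses.
from statistics import mean

facilitator_scores = [10, 10, 10, 10, 10, 10, 10, 9, 9, 9, 7.5]  # hypothetical

print(f"mean: {mean(facilitator_scores):.1f}")                             # 9.5
print(f"without low outlier: {mean(sorted(facilitator_scores)[1:]):.1f}")  # 9.7
```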
I think these results are pretty positive and indicate that this program format was at least about as good as other formats – then again I’m not aware of anybody systematically reviewing group experiments. If you know of somebody doing this, let me know!
I also had participants fill out an anonymous survey at the beginning of the program, hoping to do a pre-post comparison of the program’s impact and to check whether any pre-survey answers predicted completing the program. The data ended up being a mess, and there isn’t much of it. It is garbage in large part because my coding scheme for connecting pre and post surveys was bad – I am so unconfident in the data that I am not including my analysis here.
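For anyone attempting this, here is a sketch of the matching approach I would use next time: have participants write the same self-generated code on both surveys and join on it. The files and column names are assumptions about a hypothetical survey export, not my actual data:

```python
# A sketch, not my actual pipeline: join pre and post surveys on a
# participant-generated code. CSV files and column names are assumptions.
import pandas as pd

pre = pd.read_csv("pre_survey.csv")    # assumed columns: code, engagement, ...
post = pd.read_csv("post_survey.csv")  # assumed columns: code, engagement, ...

# Normalize the codes so "AB12 " and "ab12" still match.
for df in (pre, post):
    df["code"] = df["code"].str.strip().str.lower()

matched = pre.merge(post, on="code", suffixes=("_pre", "_post"))
matched["engagement_change"] = matched["engagement_post"] - matched["engagement_pre"]
print(f"matched {len(matched)} of {len(pre)} pre-survey respondents")
print(matched["engagement_change"].describe())
```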
Other
All the Programs
For the winter, we advertised and planned to run an Intro Fellowship, a Doing Good Better (DGB) reading group, a Precipice reading group, and a program aimed at people who had read DGB with us last semester.
All applicants were accepted to all programs.
The Intro Fellowship (26 applicants) went well.
The DGB reading group (7 applicants) was canceled after nobody showed up to the meetings.
The Precipice reading group (10 applicants) had only 0-4 participants depending on the week.
The Post-DGB fellowship (6 applicants) had 1-3 attendees per week. This program was 3 meetings: animals/expanding compassion, longtermism, and existential risk.
Besides the Intro Fellowship, these programs went fairly poorly attendance-wise, and I’m not quite sure why. One possibility is that our advertising focused mainly on the Intro program, and our greater enthusiasm for it made participants more excited about it. The Intro program was also a larger time commitment than the others, so perhaps people with lots of free time signed up for it, while those with less time signed up for the others and then had even less time than expected. A couple of people in the other programs said scheduling issues and the time commitment were why they could not participate; the flexible schedule of the Intro program might have helped there too. People were also not great about filling out their scheduling availability, given that it was a break.
Advertising
We advertised our programs via emails to our mailing list, a school-wide email, tabling outside the dining hall, and one-on-one conversations with friends. Something else I think was helpful: individually emailing people who had come to our events last semester to personally encourage them to apply. In retrospect, it probably would have been good to set up an automation for this (a sketch of one is below).
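A minimal sketch of what that automation could look like, using only the Python standard library; the CSV layout, addresses, and SMTP details are all placeholders, not real values:

```python
# Hypothetical mail-merge sketch: personalized invitations to everyone who
# attended an event last semester. All names, files, and servers are placeholders.
import csv
import smtplib
from email.message import EmailMessage

TEMPLATE = """Hi {name},

You came to {event} last semester, and I think you'd enjoy our winter
Intro Fellowship. You can apply here: <link>.

Best,
EA Claremont
"""

with smtplib.SMTP_SSL("smtp.example.edu", 465) as smtp:
    smtp.login("ea-claremont@example.edu", "app-password-here")
    with open("past_attendees.csv") as f:  # assumed columns: name, email, event
        for row in csv.DictReader(f):
            msg = EmailMessage()
            msg["Subject"] = "Winter Intro EA Fellowship"
            msg["From"] = "ea-claremont@example.edu"
            msg["To"] = row["email"]
            msg.set_content(TEMPLATE.format(name=row["name"], event=row["event"]))
            smtp.send_message(msg)
```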
Tabling was relatively unsuccessful, in large part because it was finals week: other students were busy and didn’t want to talk, and we had limited organizer capacity.
If I were to do this again, I would try to do the majority of advertising maybe 3 weeks before the end of the semester, rather than in the last 2 weeks as I did here.
Another tip
Something that increased Intro Fellowship starting rates was using Mailchimp to send the acceptance/introduction email. I used email tracking to see who had not opened the syllabus links and individually texted them a reminder. A couple of people said this was helpful because they were not checking email during break.
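As a sketch of the follow-up step: export the campaign’s activity report as a CSV and filter for non-openers. The column names here are assumptions about the export, not Mailchimp’s documented schema:

```python
# Hypothetical sketch: list everyone who never opened the campaign email so
# they can be texted individually. Assumed CSV columns: email, opens.
import csv

with open("campaign_activity.csv") as f:
    rows = list(csv.DictReader(f))

non_openers = [r["email"] for r in rows if int(r["opens"] or 0) == 0]
print(f"{len(non_openers)} people to text a reminder:")
for email in non_openers:
    print(" -", email)
```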
I am quite uncertain about whether email tracking is ethically acceptable; trying it here was successful insofar as it helped interested people attend. Given my uncertainty, I am unsure whether others should do this.
Thanks to James Lucassen for comments on this write-up, and to James and Mia Taylor for help facilitating.