Thanks for posting this!
One worry I have, particularly relevant to a Project Based Fellowship, is that it would not involve sufficiently learning key ideas. Mauricio discussed this, but I think there’s even more to it than is obvious. This critique of EA (https://www.lesswrong.com/posts/CZmkPvzkMdQJxXy54/another-critique-of-effective-altruism) points out that we frequently “Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.” The less content presented in a fellowship, the more likely we are to go down that route, I think; EA is really, really complex, and one thing I like about the Intro Fellowship is that you can end it thinking “I have the basics, but there is so much more to know.” I worry that with a shorter fellowship, participants may not realize how little of the surface they’ve scratched. They may come to identify EA with just RCT-backed global poverty work; it almost feels better if people think of EA as global poverty + animal welfare + AI + longtermism + pandemics and climate change, even though these are cause areas and not principles. Anecdotally, I’ve found that many folks just learning about EA are turned off by what feels like armchair cause prioritization that is too theoretical; giving them specific causes makes more sense for many folks, and if you give them enough causes, they will internalize that EA is actually about the principles which lead to such diversity in causes.
While I share your worry about EA becoming defined by cause areas rather than principles, it feels much more likely that we would get the situation Mauricio mentioned of “vaguely EA-related project ideas” and people who walk away from the fellowship without actually understanding EA very well. On this note, conversations with students not involved in EA often go like so:
Them: “What does your club do?”
Me: “We discuss ways of improving the world most effectively and prepare students to do something really valuable with their lives”
Them: “Do you do anything besides talking?!”
Me: “Do career workshops count?...”
At least at the Claremont Colleges, students are really excited about actually doing stuff. And this can be difficult to reconcile with EA. This semester, we decided to do Effectively Altruistic projects limited to the scope of our school (e.g., what can we do to improve student wellness the most? Decrease the school’s carbon footprint? etc.). We’ve been working on Cause Prioritization for this, narrowing down a large list into a small one. And we’re going to have small groups of students tackle these projects in the spring. Will follow up with forum posts afterward to report on how it went.
However, I don’t think doing this alone is a good idea; it doesn’t actually give folks a sense of what EA is all about unless they already have good background knowledge. So, this Winter Break, we’re doing a bunch of programming that we are pushing super hard: mainly, taking the 8-week Intro Fellowship and squishing it into 3.5-4 weeks. This is the main program we want people to do. The idea is, folks learn about EA ideas during break when they’re not stressed about class; then we come back to school, and the post-fellowship engagement is the Project Based Fellowship (which I expect will be good for most people) and Career Planning. I’m optimistic about this plan for a bunch of reasons, and it potentially presents one solution to the problem.
Pros of doing this: students don’t have the fellowship overlapping with school; it’s fairly intense and fast, which has the benefits you discuss; and it keeps students connected to one another and mentally engaged during break (very good in my opinion/experience, cuz I get lonely and lazy).
This is similar to the 3 week fellowship sprint you suggest, except that I do not think of this as at all about identifying promising fellows. I need to write up my thoughts on this more thoroughly in a shortform post, but pretty much I think the content of the Intro Fellowship would be useful to like 50-80% of students, even if only 20% continue engaging with EA afterward. EA has really good ideas that are useful to almost everybody, and the emphasis on highly promising people seems elitist and holds us back from impacting more students in a smaller way.
students are really excited about actually doing stuff. And this can be difficult to reconcile with EA. This semester, we decided to do Effectively Altruistic projects limited to the scope of our school (e.g., what can we do to improve student wellness the most? Decrease the school’s carbon footprint? etc.).
Hm, I’m kind of nervous about the norms an EA group might set by limiting its projects’ ambitions to its local community. Like, we know a dollar or an hour of work can do way more good if it’s aimed at helping people in extreme poverty than US college students… what group norms might we be setting if our projects’ scope overlooks this?
At the same time, I think you’re spot on in seeing that many students want to do projects, and I really appreciate your work toward offering something to these students. As a tweak on the approach you discuss, what are your intuitions about having group members do projects with global scope? I know there’s a bunch of EA undergrads who are working on projects like doing research on EA causes, or running classes on AI safety or alternative proteins, or compiling relevant internship opportunities, or running training programs that help prepare people to tackle global issues, or running global EA outreach programs. This makes me optimistic that global-scope projects:
Are feasible (since they’re being done)
Are enough to excite the students who want to get to doing stuff
And have a decent amount of direct impact, while reinforcing core EA mindsets
Good points. We should have explained our approach in a separate post that we could link to, because I didn’t explain it too well in my comment.
We are trying to frame the project like so: this is not the end goal. It is practice at what this process looks like, and it is a way to improve our community in a small but meaningful way.
Put another way, the primary goals are skill building and building our club’s reputation on campus. Another goal is just to try more stuff to help meta-EA community building; even though we have a ton of resources on community building, we don’t (seem to) have all that many trials or examples of groups doing weird stuff and seeing what happens.
Some of the projects we are considering are related to global problems (e.g., carbon labeling on food in the dining hall). I like the project ideas you suggest, and we will consider them.
One reason we’re focusing on local projects is that the “international charity is colonialism” sentiment is really strong here. I think it would be really bad for the club if we got strongly associated with that sentiment. Attempting to dispel this idea is also on my to-do list, but low priority.
Another point of note is that some of what the EA community does is only good in expectation. For instance, decreasing extinction risk by 0.5% per century is considered a huge gain by most EAs. But imagine tabling at a club fair and saying “Oh, what did we actually accomplish last year? We trained up students to spend their careers working on AI safety in the hopes of decreasing the chance of humanity ending from robots by 0.02%.” Working on low-probability, high-impact causes and interventions is super important, but I think it makes for crappy advertising because most people don’t think about the world in Expected Value.
Side point to the side point: I agree that a dollar would go much further in terms of extreme poverty than college students, but I’m less sure about an hour of time. I am in this college community; I know what its needs are. I would spend 5 minutes of the hour figuring out what needs to be done and the rest of the time actually helping folks. If I spent an hour on global poverty, it’s unclear I would actually “do” anything; I would spend most of the time either researching or explaining to my community why it is morally acceptable to do international charity work at all. But, again, we are considering some relevant projects.
Thanks for the thoughtful response! I think you’re right that EA projects being legibly good to people unsympathetic with the community is tough.
It is practice at what this process looks like, and it is a way to improve our community in a small but meaningful way
I like the first part; I’m still a bit nervous about the second part? Like, isn’t one of the core insights of EA that “we can and should do much better than ‘small but meaningful’”?
And I guess even with the first part (local projects as practice), advice I’ve heard about practice in many other contexts (e.g. practicing skills for school, or musical instruments, or sports, or teaching computers to solve problems by trial and error) is that practice is most useful when it’s as close as possible to the real thing. So maybe we can give group members even better practice by encouraging them to practice unbounded prioritization/projects?
I think it makes for crappy advertising because most people don’t think about the world in Expected Value
There’s a tricky question here about who the target audience of our advertising is. I think you’re right that working on mainstream/visible problems is good for appealing to the average college student. But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:
Taking people who are already into weird EA stuff and connecting them with one another
And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs
And there seems to be a tradeoff where branding/style that strongly appeals to the average student might be a turnoff for the above audiences. The above audiences are of course much smaller in number, but I suspect they make up for it by being much more likely to—given the right environment—get very into this stuff and have tons of impact. Personally, I think there’s a good chance I wouldn’t have gotten very involved with my local group (which I’m guessing would have significantly decreased my future impact, although I wouldn’t have known it) if it hadn’t been clear to me that they were serious about this stuff.
I agree that a dollar would go much further in terms of extreme poverty than college students, but I’m less sure about an hour of time
That’s fair. I guess we could say one could always spend that hour making extra money to give away, although that’s kind of a copout (and doesn’t address the optics issue).
(As a side note which isn’t very decision-relevant / probably preaching to the choir, it really annoys me that some people think the anti-colonialist move is to let poor foreign kids die of malaria.)
Lastly, two other tactics for advertising/optics that I’m optimistic about:
With things like AI safety, I think you’re right that most of the actual good done is just in expectation and won’t be clear for a while. But I’m not sure it’s only good in expectation—I’m optimistic that there’s lots of potential for longtermist work to have good, higher-probability spillover effects in the near term. For example, even if work on AI interpretability doesn’t help avoid AI deception, it may be more clearly a step toward mitigating algorithmic bias. Or even if work on truthful AI also doesn’t help avoid AI deception, maybe it can help mitigate the misuse of AI to create misinformation. I imagine there’s similar nice spillovers in biosecurity. Emphasizing such benefits / potential applications might be enough to appeal to risk-averse, scope-insensitive audiences.
I’m also optimistic that these mainstream audiences would see tangible “intermediate steps” toward impact as progress, even if they hadn’t clearly paid out yet. E.g. I suspect “we ran a class on alternative protein, and it got 100+ students, and this is important for addressing sustainability and zoonotic disease and animal abuse” will sound like concrete, tangible impact to the not-so-analytical audiences we’re talking about, even though its impact remains to be seen.
Tangent/caveat to my point about practice: Actually, it seems like in the examples I mentioned, practicing on easier versions of a problem first is often very helpful for being able to do good practice on equivalents of the real thing (e.g. musical scale drills, sport drills, this). I wonder what this means for EA groups.
(On the other hand, I’m not sure this is a very useful set of analogies—maybe the more important thing for people who are just getting into EA is for them to get interested in core EA mindsets/practices, rather than skilled in them, which the “practice” examples emphasize. And making someone do scale/sports drills probably isn’t the best way to get them interested in something.)
Again, thank you for some amazing thoughts. I’ll only respond to one piece:
But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:
Taking people who are already into weird EA stuff and connecting them with one another
And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs
I obviously can’t disagree with your anecdotal experience, but I think what you’re talking about here is closely related to what I see as one of EA’s biggest flaws: lack of diversity. I’m not convinced that weird people know how to do good better than anybody else, and by not creating a way for other people to be involved in this awesome movement, we lose the value they would create for us and the value we would create for them. There also seems to be a suspicious correlation between these kinds of “receptive to EA ideas” people and white men, which is worrisome. That is, even if our goal is to target marketing to weird EAs or people receptive to EA ideas, the way we’re doing that might have some bias that has led our community to become disproportionately white and male relative to most general populations.
On that note, I think learning about EA has made my life significantly better, and I think this will be the case for many other people. I think everybody who does an Intro Fellowship (and isn’t familiar with EA) learns something that could be useful to their life, even if they don’t join the community or become more involved. I don’t want to miss out on these people, even if it’s a more efficient allocation of time/resources to only focus on people we expect will become highly engaged.
Shortform post coming soon about this ‘projects idea’ where I’ll lay out the pros and cons.
Good points! Agree that reaching out beyond overrepresented EA demographics is important—I’m also optimistic that this can be done without turning off people who really jive with EA mindsets. (I wish I could offer more than anecdotes, but I think over half of the members of my local group who are just getting involved and seem most enthusiastic about EA stuff are women or POC.)
I’m not convinced that weird people know how to do good better than anybody else
I also wouldn’t make that claim about “weird people” in general. Still, I think it’s pretty straightforward that people who are unusual along certain traits know how to do good better than others, e.g. people who are unusually concerned with doing good well will probably do good better than people who don’t care that much.
I don’t want to miss out on these people, even if it’s a more efficient allocation of time/resources to only focus on people we expect will become highly engaged.
Man, I don’t know, I really buy that we’re always in triage, and that unfortunately choosing a less altruistically efficient allocation of resources just amounts to letting more bad things happen. I agree it’s a shame if some well-off people don’t get the nice personal enrichment of an EA fellowship—but it seems so much worse if, like, more kids die because we couldn’t face hard decisions and focus our resources on what would help the most.
Edit: on rereading I realize I may have interpreted your comment too literally—sorry if I misunderstood. Maybe your point about efficient allocation was that some forms of meta-EA might naively look like efficient allocation of resources without being all that efficient (because of e.g. missing out on benefits of diversity), so less naive efficiency-seeking may be warranted? I’m sympathetic to that.
I think I strongly agree with the value of learning about at least the core arguments for a bunch of different causes. Taking seriously that some people are devoting their whole lives to making the future go better, or worrying about lie detection, or pandemics that have never happened, or digital people, or animals that seem nonsentient to most people, really pushes your mind in a particular way. And in some ways, the weirder the better, at least for the purpose of really expanding what people think of when they think of “doing good.”
Having a structured set of resources that people could engage with over breaks seems really valuable. It could let highly engaged participants who want to go faster do the “Thanksgiving Break” binge-read, or the “one/two week break” set of readings, and so on, with all of those having activities/interactive elements. Is this something you’re thinking of writing up?
Yes. Will do an end of the year assessment of what worked and what didn’t. Focus will likely be on Winter Break Programming and Project Fellowships.