students are really excited about actually doing stuff. And this can be difficult to reconcile with EA. This semester, we decided to do Effectively Altruistic projects limited to the scope of our school (e.g., what can we do to improve student wellness the most? Decrease the school’s carbon footprint? etc.).
Hm, I’m kind of nervous about the norms an EA group might set by limiting its projects’ ambitions to its local community. Like, we know a dollar or an hour of work can do way more good if it’s aimed at helping people in extreme poverty than US college students… what group norms might we be setting if our projects’ scope overlooks this?
At the same time, I think you’re spot on in seeing that many students want to do projects, and I really appreciate your work toward offering something to these students. As a tweak on the approach you discuss, what are your intuitions about having group members do projects with global scope? I know there’s a bunch of EA undergrads who are working on projects like doing research on EA causes, or running classes on AI safety or alternative proteins, or compiling relevant internship opportunities, or running training programs that help prepare people to tackle global issues, or running global EA outreach programs. This makes me optimistic that global-scope projects:
Are feasible (since they’re being done)
Are enough to excite the students who want to get to doing stuff
And have a decent amount of direct impact, while reinforcing core EA mindsets
Good points. We should have explained our approach in a separate post that we could link to, since I didn’t explain it very well in my comment.
We are trying to frame the project like so: This is not the end goal. It is practice at what this process looks like, and a way to improve our community in a small but meaningful way.
Put another way, the primary goals are skill building and building our club’s reputation on campus. Another goal is to just try more stuff to help meta-EA community building; even though we have a ton of resources on community building, we don’t seem to have all that many trials or examples of groups doing weird stuff and seeing what happens.
Some of the projects we are considering are related to global problems (e.g., carbon labeling on food in the dining hall). I like the project ideas you suggest, and we will consider them.
One reason we’re focusing on local projects is that the “international charity is colonialism” sentiment is really strong here. I think it would be really bad for the club if we got strongly associated with that sentiment. Attempting to dispel this idea is also on my to-do list, but it’s low priority.
Another point of note is that some of what the EA community does is only good in expectation. For instance, decreasing extinction risk by 0.5% per century is considered a huge gain by most EAs. But imagine tabling at a club fair and saying “Oh, what did we actually accomplish last year? We trained up students to spend their careers working on AI safety in the hopes of decreasing the chance of humanity ending from robots by 0.02%.” Working on low-probability, high-impact causes and interventions is super important, but I think it makes for crappy advertising because most people don’t think about the world in terms of expected value.
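(To make the expected-value point concrete, here’s a minimal sketch with entirely made-up numbers, just to illustrate why a “0.02% chance” intervention can still dominate a sure-thing local project in expectation. The payoffs and probabilities below are hypothetical, not claims about any real intervention.)

```python
def expected_value(probability: float, payoff: float) -> float:
    """Expected value of an uncertain outcome: probability times payoff."""
    return probability * payoff

# A certain, tangible local project: helps 100 people for sure.
local = expected_value(1.0, 100)

# A speculative project: a 0.02% chance of averting a catastrophe that
# would otherwise harm 10 million people (purely illustrative figure).
speculative = expected_value(0.0002, 10_000_000)

print(local)        # 100.0
print(speculative)  # 2000.0 -- 20x the local project, in expectation
```

The catch, as noted above, is that in 99.98% of worlds the speculative project “accomplishes nothing” visible, which is exactly what makes it hard to advertise at a club fair.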
Side point to the side point: I agree that a dollar would go much further in terms of extreme poverty than college students, but I’m less sure about an hour of time. I am in this college community; I know what its needs are. I would spend 5 minutes of the hour figuring out what needs to be done and the rest of the time actually helping folks. If I spent an hour on global poverty, it’s unclear I would actually “do” anything. I would spend most of the time either researching or explaining to my community why it is morally acceptable to do international charity work at all. But, again, we are considering some relevant projects.
Thanks for the thoughtful response! I think you’re right that making EA projects legibly good to people unsympathetic to the community is tough.
It is practice at what this process looks like, it is a way to improve our community in a small but meaningful way
I like the first part; I’m still a bit nervous about the second part? Like, isn’t one of the core insights of EA that “we can and should do much better than ‘small but meaningful’”?
And I guess even with the first part (local projects as practice), advice I’ve heard about practice in many other contexts (e.g. practicing skills for school, or musical instruments, or sports, or teaching computers to solve problems by trial and error) is that practice is most useful when it’s as close as possible to the real thing. So maybe we can give group members even better practice by encouraging them to practice unbounded prioritization/projects?
I think it makes for crappy advertising because most people don’t think about the world in Expected Value
There’s a tricky question here about who the target audience of our advertising is. I think you’re right that working on mainstream/visible problems is good for appealing to the average college student. But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:
Taking people who are already into weird EA stuff and connecting them with one another
And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs
And there seems to be a tradeoff where branding/style that strongly appeals to the average student might be a turnoff for the above audiences. The above audiences are of course much smaller in number, but I suspect they make up for it by being much more likely to—given the right environment—get very into this stuff and have tons of impact. Personally, I think there’s a good chance I wouldn’t have gotten very involved with my local group (which I’m guessing would have significantly decreased my future impact, although I wouldn’t have known it) if it hadn’t been clear to me that they were serious about this stuff.
I agree that a dollar would go much further in terms of extreme poverty than college students, but I’m less sure about an hour of time
That’s fair. I guess we could say one could always spend that hour making extra money to give away, although that’s kind of a copout (and doesn’t address the optics issue).
(As a side note which isn’t very decision-relevant / probably preaching to the choir, it really annoys me that some people think the anti-colonialist move is to let poor foreign kids die of malaria.)
Lastly, two other tactics for advertising/optics that I’m optimistic about:
With things like AI safety, I think you’re right that most of the actual good done is just in expectation and won’t be clear for a while. But I’m not sure it’s only good in expectation—I’m optimistic that there’s lots of potential for longtermist work to have good, higher-probability spillover effects in the near term. For example, even if work on AI interpretability doesn’t help avoid AI deception, it may be more clearly a step toward mitigating algorithmic bias. Or even if work on truthful AI also doesn’t help avoid AI deception, maybe it can help mitigate the misuse of AI to create misinformation. I imagine there’s similar nice spillovers in biosecurity. Emphasizing such benefits / potential applications might be enough to appeal to risk-averse, scope-insensitive audiences.
I’m also optimistic that these mainstream audiences would see tangible “intermediate steps” toward impact as progress, even if they hadn’t clearly paid out yet. E.g. I suspect “we ran a class on alternative protein, and it got 100+ students, and this is important for addressing sustainability and zoonotic disease and animal abuse” will sound like concrete, tangible impact to the not-so-analytical audiences we’re talking about, even though its impact remains to be seen.
Tangent/caveat to my point about practice: Actually, it seems like in the examples I mentioned, practicing on easier versions of a problem first is often very helpful for being able to do good practice on equivalents of the real thing (e.g. musical scale drills, sports drills). I wonder what this means for EA groups.
(On the other hand, I’m not sure this is a very useful set of analogies—maybe the more important thing for people who are just getting into EA is for them to get interested in core EA mindsets/practices, rather than skilled in them, which the “practice” examples emphasize. And making someone do scale/sports drills probably isn’t the best way to get them interested in something.)
Again, thank you for some amazing thoughts. I’ll only respond to one piece:
But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:
Taking people who are already into weird EA stuff and connecting them with one another
And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs
I obviously can’t disagree with your anecdotal experience, but I think what you’re talking about here is closely related to what I see as one of EA’s biggest flaws: lack of diversity. I’m not convinced that weird people know how to do good better than anybody else, but by not creating a way for other people to be involved in this awesome movement, we lose the value they would create for us and the value we would create for them. There also seems to be a suspicious correlation between these kinds of “receptive to EA ideas” people and white men, which is worrisome. That is, even if our goal is to target marketing to weird EAs or people receptive to EA, it seems like the way we’re doing that might have some bias that has led our community to be disproportionately white and male relative to most general populations.
On that note, I think learning about EA has made my life significantly better, and I think this will be the case for many other people. I think everybody who does an Intro Fellowship (and isn’t familiar with EA) learns something that could be useful to their life, even if they don’t join the community or become more involved. I don’t want to miss out on these people, even if it’s a more efficient allocation of time/resources to only focus on people we expect will become highly engaged.
Shortform post coming soon about this ‘projects idea’ where I’ll lay out the pros and cons.
Good points! Agree that reaching out beyond overrepresented EA demographics is important—I’m also optimistic that this can be done without turning off people who really jive with EA mindsets. (I wish I could offer more than anecdotes, but I think over half of the members of my local group who are just getting involved and seem most enthusiastic about EA stuff are women or POC.)
I’m not convinced that weird people know how to do good better than anybody else
I also wouldn’t make that claim about “weird people” in general. Still, I think it’s pretty straightforward that people who are unusual along certain traits know how to do good better than others, e.g. people who are unusually concerned with doing good well will probably do good better than people who don’t care that much.
I don’t want to miss out on these people, even if it’s a more efficient allocation of time/resources to only focus on people we expect will become highly engaged.
Man, I don’t know, I really buy that we’re always in triage, and that unfortunately choosing a less altruistically efficient allocation of resources just amounts to letting more bad things happen. I agree it’s a shame if some well-off people don’t get the nice personal enrichment of an EA fellowship—but it seems so much worse if, like, more kids die because we couldn’t face hard decisions and focus our resources on what would help the most.
Edit: on rereading I realize I may have interpreted your comment too literally—sorry if I misunderstood. Maybe your point about efficient allocation was that some forms of meta-EA might naively look like efficient allocation of resources without being all that efficient (because of e.g. missing out on benefits of diversity), so less naive efficiency-seeking may be warranted? I’m sympathetic to that.