I like the idea of having a more relatable message than “do the most good”, but I am not sure how much more relatable “do a lot of good” is. To me it seems that there might not be much of a difference between the two, at least in how they are used in day-to-day discussion (that is, applying a filter of “practicality” to the maximization problem).
For example, I thought it was common EA knowledge that there are “top recommended cause areas” (on 80K), where some are higher on the list but with a big asterisk of uncertainty. There are also enough people to work on all of them, so there’s no need for a final judgement of a top 3, let alone a single most important cause. In a way, this could be a “macro” EA perspective: asking not what is the most good an individual could do, but what is the most good a group/society can do, with appropriate allocation between cause areas of high ITN.
I think EA can come across as a bit elitist to others, especially to people volunteering in non-EA charities or trying to do good in “traditional” ways (doctors, med-tech, activism, etc.). Perhaps “do a lot of good” can help with that, but I still think it would come to similar conclusions in some cases.
I have a friend who has been volunteering with Make-A-Wish for the past 10 years, and I felt a little uneasy telling him about EA without offending him. I was able to in the end, and while he was intrigued, I don’t think he was convinced.
I had a thought a while ago that perhaps the world would be much better if there were a lot of people committed to doing “at least a little good”, rather than a (relatively) small group of highly ambitious people doing “the most good”. However, perhaps there is room for that as a separate movement from EA. Plus, someone certainly needs to work on the “big things” too, which seems like a good niche for EA.
Thank you for reading the post and for the helpful comment! I totally agree that “do a lot of good” isn’t a particularly unique or sexy message. As I mentioned in the post, I believe that part of EA’s early appeal was that maximization was so radical and elusive. I do think there is a fairly big difference between the two messages, though, especially when it comes to elite donors. Academics like Anand Giridharadas (in Winners Take All) and Rob Reich (in Just Giving) argue that elite philanthropy is often a well-disguised charade to boost the donor’s own power and status. I’m sympathetic to the argument that these elites are sometimes unaware of the ways in which their giving (e.g. to private schools, religious institutions, the arts, etc.) increases existing inequalities. However, it’s also easy to argue that some of these wealthy individuals are not doing “a lot of good”. I don’t think “do a lot of good” is some magic bullet that will fix billionaire philanthropy, but I do think it might make it easier to convince elites they need to change their philanthropy (and other actions...) than the rhetoric of maximization does.
Great point about 80,000 Hours. It’s certainly interesting to take a macro-level view of what a large group or whole society could do with respect to cause areas. It reminds me a little of trying to reason like a Rawlsian about ideal theory. Our decisions as individuals become incredibly contingent on what other members of the group decide, especially other members with wealth and power. In this ideal world, the “Neglectedness” criterion arguably becomes the most important, which seems to take away from the power of EA (because, as you say, our niche is taking on the big things, not just the things that nobody is doing). I think we’re in a time when there are tons of Important and Tractable causes which aren’t altogether neglected but still need more resources. All this is just a roundabout way of saying I’m skeptical of both the usefulness and feasibility of trying to take the macro perspective and maximize a group’s impact.
I’m particularly sympathetic to your point about elitism, and part of my motivation for this post was to try to temper that problem within EA. In my conversations with friends about EA, it’s never the idea of maximization that changes their worldview. Instead, they’re usually more interested in the argument that there are big ways to make an impact which elite philanthropy largely ignores. If you’re talking with somebody who dedicates her free time to volunteering for an org like Make-A-Wish, maximization is a non-starter, but sowing the seed that there might be additional avenues for impact will at least allow for some discourse.
On your last point, I think that the “everybody just does a little good” world is already the world we live in! I agree that there is a serious need for groups of people to tackle the big things, but in an ideal world, this is what governments do. As many nonprofits say of themselves, EA’s main goal should be to not need to exist (because institutions are tackling high-ITN goals efficiently).
Great post, well explained!