Thanks for posting, these look super interesting!
I’m hoping to read (and possibly respond to) more, but I ~randomly started with the final article “Saving the World Starts at Home.”
My thoughts on this one are mostly critical: I think it fundamentally misunderstands what EA is about (due to relying too heavily on a single book for its conception of EA), and will not be persuasive to many EAs. But it raises a few interesting critiques of EA prioritization at the end.
Summary
The Most Good You Can Do has a list (referred to as “The List”) of some prototypical EA projects; roughly: “earn to give, community building, working in government, research, organizing, organ donation.”
Thesis of the piece: “Building a good home” should be on The List.
Some reasons it’s good to build a good home: having a refuge (physical and psychological safety), showing hospitality to others, raising a family.
I was expecting to see discussion of externalities here: perhaps how creating a good home boosts effectiveness in other altruistic endeavors, or how the spillovers to society are larger than one might expect. The latter shows up a bit, but the article mostly discusses benefits to the people who physically enter your home.
Traditional EA priorities have been critiqued on the following grounds:
Demandingness
Motivational obstacles / they’re psychologically difficult
Epistemic limits: the world is very complicated
Ineffectiveness
Grift
Building a good home is not subject to these criticisms: it’s not overly demanding, it’s intrinsically motivating (or at least more so than traditional EA interventions), and it clearly produces direct good outcomes without being subject to difficult-to-determine nth-order effects.
According to Singer, EAs don’t need to maximize the good at all times, and don’t have to be perfectly impartial. So it’s not necessary to discuss whether this is among the most effective interventions in order to argue that this should be an EA priority—effectively creating some good is enough.
(IMO this is simply a misunderstanding of EA, and undermines much of the article.)
Why do EAs ignore this issue? Some suggestions:
It’s not effective enough to count as an EA priority. (The rest of the article is arguing against this point.)
Status: It’s lower-status than other EA priorities, like donating lots of money to charity or producing interesting research
It’s less amenable to calculation
EAs have a bias toward “direct” rather than “indirect” forms of benevolence
(This seems in tension with the point from earlier about how reading to your kid produces clear, direct value, in contrast to the unclear and more-prone-to-backfire approach of donating to Oxfam. I also think EAs are super willing to consider indirect benevolence, but I digress.)
Politics: “Building a home” is conservative-coded in the US, and EA is left-leaning.
What I liked best
I think the “status” and “politics” critiques of EA prioritization are useful and probably under-discussed.
Certain fields (e.g. AI safety research) are often critiqued for being suspiciously interesting / high-status / high-paying, but this article makes the case that even donating to GiveWell is a little suspicious in how much status it can buy. (But I think there are likely much more efficient ways to buy status; donating 1% of your income probably buys much more than 1/10 the status you’d get from donating 10%.)
I also think it’s reasonably likely that there are some conservative-coded causes that EAs undervalue for purely political reasons (but I don’t have any concrete examples at hand).
Critiques
There are a few fundamental issues with the analysis that keep it from landing for me.
(This is a bit scattershot; I narrowed it down to a few points to keep this from being 3x longer.)
It’s too anchored on Singer’s description of EA in The Most Good You Can Do, rather than the current priorities of the community.
A recurring example is “should you work an extra hour at Starbucks to donate $10 to Oxfam, or spend that hour hosting friends or reading a story to your kid?”
Oxfam is not currently a frequently-recommended charity in EA circles (it’s not recommended by GWWC, although Singer’s org The Life You Can Save does recommend it).
I’ve never heard “work a low-wage job to give” advocated as a top EA recommendation, so this isn’t a strong point of comparison.
It doesn’t engage with the typical criteria for EA causes (e.g. the ITN framework), and especially fails to engage with on-the-margin thinking.
“Are we to believe that effective altruists think that if we had more bad homes, this would not affect how much people care about the global poor or give to charities? Surely not.”
The article doesn’t discuss how big this impact is, to what extent a marginal increase in “good homes” produces a marginal increase in charitable giving, or how that compares to other approaches to increasing donations.
“If large numbers of people were regularly giving much of their income to charity and donating their kidneys, these activities would not thereby cease being acts of effective altruism. So, home life cannot be excluded from the List simply because many people already do it.”
Neglectedness is a key consideration for determining EA priorities: if there were no shortage of kidney donors, the argument for kidney-donation-as-effective-altruism would indeed be much weaker.
Rather than arguing directly that “building a good home” has positive externalities on par with the good done by other EA priorities, the main argument seems to be something like “this is technically compatible with the definition of effective altruism in TMGYCD.”
From the conclusion of section VII: “Assuming home life is an effective way of [creating] great good for the world, then effective altruists should have no complaint about recommending it as one potential expression of effective altruism. … [Otherwise,] the effective altruist commits to a very demanding view, one they should state and defend.”
I think this conflates “demandingness” (asking people to sacrifice a lot) with “having a high bar for declaring something an EA intervention.” For instance, you can recommend only the top 0.01% of charities, but still only ask people to give 10%.
EAs do state and defend the view that there should be a very high bar for what counts as an EA intervention.