Hi, thanks again for the detailed reply — I really appreciate the clarity. I’m finding it genuinely eye-opening that many issues I assumed were morally significant turn out to matter far less in practice once scale and impact are properly quantified. I think I was heavily influenced by various online movements that are very loud and visible, so it confused me that EA rarely foregrounded topics like slave labor in chocolate or Coca-Cola’s water practices, despite covering other global issues such as malaria.
One thing I do want to clarify is that there are ethical chocolate companies using fair-trade, non–child-labor supply chains, so it’s not that “all chocolate must be boycotted,” but rather that many major brands have problematic sourcing. Still, your calculations make it clear that a solo boycott makes essentially no difference to the working hours or conditions of any child laborer, and similarly an individual boycott won’t meaningfully affect things like water extraction by Coca-Cola in Africa or India.
I'm also not sure what you mean by your calculations about buying a child's freedom for $9 per hour, or about valuing your own hour at $20-30. To be honest, from a consequentialist perspective there isn't a difference between personally doing harm and letting harm continue, but in this case you aren't really buying a child's freedom; you're just not forcing them to work an additional hour, if that reframing makes sense. It's a bit like how veganism doesn't save lives so much as avoid taking additional ones. Put simply, by refusing to buy slave-labor chocolate you aren't really helping people, you're just not hurting them (whereas buying the chocolate leads to harm).
On the broader question of morally permissible actions, I’ve been strongly shaped by this Aeon article (“Why it is better not to aim at being morally perfect”). I agree that doing genuine moral good matters, but being a 10/10 moral saint is neither realistic nor psychologically healthy. That’s why I find Schelling points useful — for example, the 10% pledge. Without Schelling points, it feels like the only consistent utilitarian answer would be to live extremely frugally and donate almost everything. So my original question was really about which actions rise to the level of meaningful Schelling points. It seems that many things that online activists frame as huge moral imperatives (boycotting certain products, etc.) actually have very small expected impact and thus probably don’t qualify.
On veganism: I’ve been extremely strict (even avoiding foods with small amounts of egg or dairy, even while traveling), but seeing that roughly 75% of the EA Forum isn’t vegan does make me wonder whether relaxing a bit would still be morally acceptable. At the same time, I’m not fully comfortable with an attitude of “it’s fine to cause some harm as long as I donate to GiveWell later,” since that can be used to rationalize almost anything (e.g., “I’ll do X harmful thing like murder a man and offset it with $5k to AMF”). I understand the logic in small, low-impact cases, but taken broadly it seems like a slippery ethical framing.
A (slightly personal) question: do you think one could argue that you might actually have more impact as an aerospace engineer donating 10% of your income than by doing local EA organization work? I imagine it depends heavily on the quality of the contributions and the kinds of community-building work being done, but I’m curious how you think about that tradeoff.
Regarding longtermism: I’ll admit I’m somewhat biased. I’ve absorbed a lot of the “nothing ever happens” attitude, so doomsday scenarios often feel exaggerated to me. But setting that aside, I can acknowledge that global catastrophic risks like nuclear conflict, pandemics, and climate instability are real and non-zero. We literally just lived through a pandemic. My concern is that nearly all meaningful action in these areas ultimately seems to run through political institutions. Research can help, but if political leaders are uninformed or uninterested, the marginal value of EA research feels limited. That sense might also be influenced by my experience with college ethics classes — AI ethics, especially, often felt detached from real-world levers.
Realistically, it seems like the most impactful thing an individual can do for x-risk at the moment is vote for politicians who take these issues seriously, but politicians who are aware of (or influenced by) effective altruism seem rare.
Finally, several of the replies have made me think about the prisoner’s dilemma dynamic underlying many collective-action problems. With things like chocolate, it seems like individual action is (almost) negligible. Veganism is different because the per-unit harm is much larger. But I’m curious how EA generally thinks about prisoner’s dilemmas in areas like climate change, voting, or even the Donation Election. Why should I vote in the Donation Election if my individual vote is almost certainly not decisive? Or more broadly, when do extremely low-probability marginal contributions still matter?
Thanks again — the discussion has been really helpful in clarifying what actually matters versus what merely feels morally salient.
Re: veganism, have you seen the FarmKind compassion calculator? https://www.farmkind.giving/compassion-calculator#try-it
It will tell you how many animals are raised for your food, depending on your dietary type, and how much money would be needed in donations to offset that.
The moral upshot here is that eggs are far worse than dairy from an animal welfare perspective, mostly because cows are a lot larger than chickens: a single cow produces a great deal of milk, so each serving of dairy corresponds to only a tiny fraction of one animal's suffering, while each egg comes from a hen that produces comparatively little. So if you feel like adding animal products to make your life convenient but worry about suffering, add dairy products.
And also donate to effective animal charities. There’s no reason to stick to the few $ per month (or fraction of a $ if it’s dairy) needed to offset the suffering from your diet—you can do much more good than that. Most EAs aren’t really into offsetting. We don’t actually think you should donate less to something because it’s more effective. The calculator is just an attempt to explain more broadly why effective animal-advocacy giving is good.
I skimmed through the website, and I’m not entirely sure how they’re calculating the dollar amounts. The comparisons also seem somewhat subjective, and some of the proposed impacts (e.g., creating more plant-based meat options) don’t obviously translate into measurable reductions in meat consumption.
I’m also not sure what they mean by this statement:
“We don’t actually think you should donate less to something because it’s more effective.”
(All the below numbers are made up for example purposes and don’t represent the cost of chicken-related interventions)
Let’s say that I want to have chicken for dinner tonight. However, I don’t want to cause chickens to suffer. I have worked out that by donating $0.10 to Chicken Charity A I can prevent the same amount of suffering that eating a chicken dinner would cause, so I do that. Then I find out that Chicken Charity B can do the same thing for $0.05, so I do that instead for tomorrow night’s chicken dinner. A charity being 2x as effective means I donate half as much to it. This is the “offsetting” mindset.
Effective Altruists do not (usually) think this way. We don’t treat our donations as aiming at a fixed amount of good and then maximise effectiveness in order to reduce the amount we have to donate. Usually we do it the other way around: we set a fixed amount based on our life circumstances (e.g. the 10% pledge) and maximise the effectiveness of that amount in order to do as much good as possible.
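The contrast between the two mindsets can be sketched as a bit of arithmetic, using the same made-up per-dinner offset costs as above (amounts in cents so the numbers stay exact; the charity names and costs are purely illustrative):

```python
# Hypothetical per-dinner offset costs from the example above (in cents).
COST_A = 10  # Chicken Charity A: cents to offset one chicken dinner
COST_B = 5   # Chicken Charity B: 2x as effective, so half the cost

dinners = 30  # a month of chicken dinners

# "Offsetting" mindset: the amount of good is fixed (offset every dinner),
# so a charity being 2x as effective means donating half as much.
spend_via_a = dinners * COST_A  # 300 cents
spend_via_b = dinners * COST_B  # 150 cents

# Usual EA mindset: the budget is fixed by life circumstances (e.g. a 10%
# pledge), so a charity being 2x as effective means twice as much
# suffering prevented for the same donation.
budget = 300  # cents
offset_via_a = budget // COST_A  # 30 dinner-equivalents of suffering prevented
offset_via_b = budget // COST_B  # 60 dinner-equivalents of suffering prevented
```

Same doubling of effectiveness, opposite responses: the offsetter cuts their donation in half, while the fixed-budget donor doubles the good they do.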