Hi, thanks for this thoughtful reply!
I agree that with chocolate and exploited labor, the situation is similar to veganism insofar as if you buy some chocolate, then (via the mechanisms of supply and demand) that means more chocolate is gonna be harvested (although not necessarily harvested by that particular company, right? so I think the argument works best only if the entire field of chocolate production is shot through with exploited labor?). Although, as Toby Chrisford points out in his comment, not all boycott campaigns are like this.
Thoughts on chocolate in particular
Reading the wikipedia page for chocolate & child labor, I agree that this seems like a more legit cause than “water privatization” or some of the other things I picked on. But if you are aiming for a veganism-style impact through supply and demand, it makes more sense to boycott chocolate in general, not a specific company that happens to make chocolate. (Perplexity says that Nestle controls only a single-digit percentage of the world’s chocolate market, “while the vast majority is produced by other companies such as Mars, Mondelez, Ferrero, and Hershey”—nor is Nestle even properly described as a chocolate company, since only about 15% of their revenue comes from chocolate! More comes from coffee, other beverages, and random other foods.)
In general I just get the feeling that you are choosing what to focus on based on which companies have encountered “major controversies” (ie charismatic news stories), rather than making an attempt to be scope-sensitive or think strategically.
“With something like slave labor in the chocolate supply chain, the impact of an individual purchase is very hard to quantify.”
Challenge accepted!!! Here are some random fermi calculations that I did to help me get a sense of scale on various things:
Google says that the average american consumes 100 lbs of chicken a year, and broiler chickens produce about 4 lbs of meat, so that’s 25 broiler chickens per year. Broiler chickens only live for around 8 weeks, so 25 chickens per year means that at any given time, about four broiler chickens are living in misery in a factory farm per american. Toss in 1 egg-laying hen to produce about 1 egg per day, and that’s five chickens per american.
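(For concreteness, here’s that arithmetic as a quick Python sketch, using the same rough figures as above:)

```python
# Rough chicken Fermi estimate, same figures as above.
chicken_lbs_per_year = 100        # average american chicken consumption
meat_lbs_per_broiler = 4
broilers_per_year = chicken_lbs_per_year / meat_lbs_per_broiler        # 25

broiler_lifespan_weeks = 8
broilers_alive_at_once = broilers_per_year * broiler_lifespan_weeks / 52   # ~3.8

laying_hens = 1                   # one hen lays roughly 1 egg per day
print(broilers_alive_at_once + laying_hens)   # ~4.8, i.e. about five chickens per american
```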
How bad is chicken suffering? Idk, not that bad IMO, chickens are pretty simple. But I’m not a consciousness scientist (and sadly, nor is anybody else), so who knows!
Meanwhile with chocolate, the average american apparently consumes about 15 pounds of chocolate per year. (Wow, that’s a lot, but apparently europeans eat even more??) The total worldwide market for chocolate is 16 billion pounds per year. Wikipedia says that around 2 million children are involved in child-labor for harvesting cocoa in West Africa, while Perplexity (citing this article) estimates that “Including farmers’ families, workers in transport, trading, processing, manufacturing, marketing, and retail, roughly 40–50 million people worldwide are estimated to depend on the cocoa and chocolate supply chain for their income or employment.”
So the average American’s share of global consumption (15 / 16 billion, or about 1 billionth) is supporting the child labor of 2 million / 1 billion = 0.002 West African children. Or, another way of thinking about this is that (assuming child laborers work 12-hour days every day of the year, which is probably wrong but idk), the average American’s yearly chocolate consumption supports about 9 hours of child labor, plus about 180 hours of labor from all the adults involved in “transport, trading, processing, manufacturing, marketing, and retail”, who are hopefully mostly all legitly-employed.
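(Same deal, as a sketch you can poke at; the 12-hour/365-day workload is the same made-up assumption as above, and the 45 million total workers is just my midpoint of the 40–50 million estimate:)

```python
# Rough chocolate Fermi estimate, same figures as above.
us_lbs_per_year = 15              # average american chocolate consumption
world_lbs_per_year = 16e9         # total world market
market_share = us_lbs_per_year / world_lbs_per_year    # ~1 billionth

child_laborers = 2e6              # west african cocoa child laborers (wikipedia)
total_workers = 45e6              # midpoint of the 40-50 million supply-chain estimate

hours_per_worker_year = 12 * 365  # assuming 12-hour days year-round (probably wrong)

child_hours = child_laborers * market_share * hours_per_worker_year          # ~8, call it 9
adult_hours = (total_workers - child_laborers) * market_share * hours_per_worker_year  # ~180
print(child_hours, adult_hours)
```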
Sometimes for a snack, I make myself a little bowl of mixed nuts + dark chocolate chips + blueberries. I buy these little 0.6-pound bags of dark chocolate chips for $4.29 at the grocery store (which is about as cheap as it’s possible to buy chocolate); each one will typically last me a couple months. It’s REALLY dark chocolate, 72% cacao, so maybe in terms of child-labor-intensity, that’s equivalent to 4x as much normal milk chocolate, so child-labor-equivalent to like 2.5 lbs of milk chocolate? So each of these bags of dark chocolate involves about 1.5 hours of child labor.
The bags cost $4.29, but there is significant consumer surplus involved (otherwise I wouldn’t buy them!). Indeed, I’d probably buy them even if they cost twice as much! So let’s say that the cost of significantly cutting back my chocolate consumption is about $9 per bag.
So if I wanted to reduce child labor, I could buy 1 hour of a child’s freedom at a rate of about $9 per bag / 1.5 hours per bag = $6 per hour. (Obviously I can only buy a couple hours this way, because then my chocolate consumption would hit zero and I can’t reduce any more.)
That’s kind of expensive, actually! I only value my own time at around $20 - $30 per hour!
And it looks doubly expensive when you consider that GiveWell top charities can save an african child’s LIFE for about $5000 in donations—assuming 50 years of life expectancy and 16 hours awake a day, that’s almost 300,000 hours of being alive versus dead. Meanwhile, if a bunch of my friends and I all decided to take the hit to our lifestyle in the form of foregone chocolate consumption instead of antimalarial bednet donations, that would only free up something like 833 hours of an african child doing leisure versus labor (which IMO seems less dramatic than being alive versus dead).
One could imagine taking a somewhat absurd “offsetting” approach, by continuing to enjoy my chocolate but donating 3 cents to Against Malaria Foundation for each bag of chocolate I buy—thereby creating, in expectation, 1.8 hours of being alive rather than untimely dead, for every 1.5 hours of child labor I incur.
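(And here’s the per-bag math bundled into one sketch; the 4x dark-chocolate multiplier and doubling the price to capture consumer surplus are the same guesses as above:)

```python
# Per-bag math plus the GiveWell comparison, same rough numbers as above.
bag_lbs = 0.6
dark_multiplier = 4                # 72% cacao ~ 4x the cocoa of milk chocolate (my guess)
milk_equiv_lbs = bag_lbs * dark_multiplier                   # ~2.4 lbs

child_hours_per_year = 9           # from the earlier estimate, per 15 lbs/year
child_hours_per_bag = milk_equiv_lbs / 15 * child_hours_per_year    # ~1.5 hours

bag_price = 4.29
willingness_to_pay = 2 * bag_price # I'd buy them even at double the price
cost_per_child_hour = willingness_to_pay / child_hours_per_bag      # ~$6/hour

# GiveWell comparison: ~$5000 saves a life; 50 years x 365 days x 16 waking hours
life_hours_per_5000 = 50 * 365 * 16                          # ~292,000 hours
child_hours_freed_per_5000 = 5000 / cost_per_child_hour      # ~833 hours

# the absurd "offset": 3 cents to AMF per bag
offset_life_hours_per_bag = 0.03 / 5000 * life_hours_per_5000       # ~1.8 hours
print(cost_per_child_hour, child_hours_freed_per_5000, offset_life_hours_per_bag)
```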
Sorry to be “that guy”, but is child labor even bad in this context? Is it bad enough to offset the fact that trading with poor nations is generally good?
Obviously it’s bad for children (or for that matter, anyone), who ought to be enjoying their lives and working to fulfill their human potential, to be stuck doing tedious, dangerous work. But, it’s also bad to be poor!
Most child labor doesn’t seem to be slavery—the same wikipedia page that cites 2 million child laborers says there are estimated to be only 15,000 child slaves. (And that number includes not just cocoa, but also cotton and coffee.) So, most of it is more normal, compensated labor. (Albeit incredibly poorly compensated by rich-world standards—but that’s everything in rural west africa!)
By analogy with classic arguments like “sweatshops are good actually, because they are an important first step on the ladder of economic development, and they are often a better option for poor people than their realistic alternatives, like low-productivity agricultural work”, or the infamous Larry Summers controversy (no, not that one, the other one. no, the OTHER other one. no, not that one either...) about an IMF memo speculating that it would be a win-win for developed countries to “export more pollution” to poorer nations: the economic transaction whereby I buy chocolate and thereby support economic activity in west africa (an industry employing 40 million people, only 2 million of whom are child laborers) seems like it might be better than not doing it. So the case for a personal boycott of chocolate seems weaker than the case for a personal boycott of factory-farmed meat (where many of the workers are in the USA, which has much higher wages and much tighter / hotter labor markets).
“I am genuinely curious about what you consider to fall within the realm of morally permissible personal actions.”
This probably won’t be a very helpful response, but for what it’s worth:
I don’t think the language of moral obligations and permissibility and rules (what people call “deontology”) is a very good way to think about these issues of diffuse, collective, indirect harms like factory farming or labor exploitation.
As you are experiencing, deontology doesn’t offer much guidance on where to draw the line when it comes to increasingly minor, indirect, or incidental harms.
It’s also not clear what to do when there are conflicting effects at play—if an action is good for some reasons but also bad for other reasons.
Deontology doesn’t feel very scope-sensitive—it just says something like “don’t eat chocolate if child labor is involved!!” and nevermind whether the industry is 100% child labor or 0.01% child labor. This kind of thinking seems to have a tendency to do the “copenhagen interpretation of ethics” thing, where you just pile on more and more rules in an attempt to avoid being entangled with bad things, when instead you should be more concerned with identifying the most important bad things and figuring out how to spend extra energy addressing those, even while letting some more minor goals slide.
I think utilitarianism / consequentialism is a better way to think about diffuse, indirect harms, because it’s more scope-sensitive and it seems to allow for more grey areas and nuance. (Deontology just says that you must do some things and mustn’t do other forbidden things, and is neutral on everything else. But consequentialism rates actions on a spectrum from super-great to super-evil, with lots of medium shades in-between.) It’s also better at balancing conflicting effects—just add them all up!
Of course, trying to live ordinary daily life according to 100% utilitarian thinking and ethics feels just as crazy as trying to live life according to 100% deontological thinking. Virtue ethics often seems like a better guide to the majority of normal daily-life decisionmaking: try to behave honorably, try to be caring and prudent and et cetera, doing your best to cultivate and apply whatever virtues seem most relevant to the situation at hand.
Personally, although I philosophically identify as a pretty consequentialist EA, in real life I (and, I think, many people) rely on kind of a mushy combination of ethical frameworks, trying to apply each framework to the area where it’s strongest.
As I see it, that’s virtue ethics for most of ordinary life—my social interactions, how I try to motivate myself to work and stay healthy, what kind of person I aim to be.
And I try to use consequentialist / utilitarian thinking to figure out “what are some of the MOST impactful things I could be doing, to do the MOST good in the world”. I don’t devote 100% of my efforts to doing this stuff (I am pretty selfish and lazy, like to have plenty of time to play videogames, etc), but I figure if I spend even a smallish fraction of my time (like 20%) aimed at doing whatever I think is the most morally-good thing I could possibly do, then I will accomplish a lot of good while sacrificing only a little. (In practice, the main way this has played out in my actual life is that I left my career in aerospace engineering in favor of nowadays doing a bunch of part-time contracting to help various EA organizations with writing projects, recruiting, and other random stuff. I work a lot less hard in EA than I did as an aerospace engineer—like I said, I’m pretty lazy, plus I now have a toddler to take care of.)
I view deontological thinking as most powerful as a coordination mechanism for society to enforce standards of moral behavior. So instead of constantly dreaming up new personal moral rules for myself (although like everybody I have a few idiosyncratic personal rules that I try to stick to), I try to uphold the standards of moral behavior that are broadly shared by my society. This means stuff like not breaking the law (except for weird situations where the law is clearly unjust), but also more unspoken-moral-obligation stuff like supporting family members, plus a bunch of kantian-logic stuff like respecting norms, not littering, etc (ie, if it would be bad if everyone did X, then I shouldn’t do X).
But when it comes to pushing for new moral norms (like many of the proposed boycott ideas) rather than respecting existing moral norms, I’m less enthusiastic. I do often try to be helpful towards these efforts on the margin, since “marginal charity” is cheap. (At least I do this when the new norm seems actually-good, and isn’t a crazy virtue-signaling spiral like, for example, the paper-straws thing, or counterproductive in other ways, like just sapping attention from more important endeavors or distracting from the real cause of a problem.) But it usually doesn’t seem “morally obligatory” (ie, in my view of how to use deontology, “very important for preserving the moral fabric of society and societal trust”) to go to great lengths to push super-hard for the proposed new norms. Nor does it usually seem like the most important thing I could be doing. So beyond a token, marginal level of support for new norms that seem nice, I usually choose to focus my “deliberately trying to be a good person” effort on trying to do whatever is the most important thing I could be doing!
Thoughts on Longtermism
I think your final paragraph is mixing up two things that are actually separate:
1. “I’m not denying [that x-risks are important] but these seem like issues far beyond the influence of any individual person. They are mainly the domain of governments, policymakers… [not] individual actions.”
2. “By contrast, donating to save kids from malaria or starvation has clear, measurable, immediate effects on saving lives.”
I agree with your second point that sadly, longtermism lacks clear, measurable, immediate effects. Even if you worked very hard and got very lucky and accomplished something that /seems/ like it should be obviously great from a longtermist perspective (like, say, establishing stronger “red phone”-style nuclear hotline links between the US and Chinese governments), there’s still a lot of uncertainty about whether this thing you did (which maybe is great “in expectation”) will actually end up being useful (maybe the US and China never get close to fighting a nuclear war, nobody ever uses the hotline, so all the effort was for naught)! Even in situations where we can say in retrospect that various actions were clearly very helpful, it’s hard to say exactly HOW helpful. Everything feels much more mushy and inexact.
Longtermists do have some attempted comebacks to this philosophical objection, mostly along the lines of “well, your near-term charity, and indeed all your actions, also affect the far future in unpredictable ways, and the far future seems really important, so you can’t really escape thinking about it”. But also, on a much more practical level, I’m very sympathetic to your concern that it’s much harder to figure out where to actually donate money to make AI safety go well than to improve the lives of people living in poor countries or help animals or whatever else—the hoped-for paths to impact in AI are so much more abstract and complicated, one would have to do a lot more work to understand them well, and even after doing all that work you might STILL not feel very confident that you’ve made a good decision. This very situation is probably the reason why I myself (even though I know a ton about some of these areas!!) haven’t made more donations to longtermist cause areas.
But I disagree with your first point, that it’s beyond the power of individuals to influence x-risks or do other things to make the long-term future go well, and that it’s instead up to governments. And I’m not just talking about individual crazy stories like that one time when Stanislav Petrov might possibly have saved the world from nuclear war. I think ordinary people can contribute in a variety of reasonably accessible ways:
I think it’s useful just to talk more widely about some of the neglected, weird areas that EA works on—stuff like the risk of power concentration from AI, the idea of “gradual disempowerment” over time, topics like wild animal suffering, the potential for stuff like prediction markets and reforms like approval voting to improve the decisionmaking of our political institutions, et cetera. I personally think this stuff is interesting and cool, but I also think it’s societally beneficial to spread the word about it. Bentham’s Bulldog is, I think, an inspiring recent example of somebody just posting on the internet as a path to having a big impact, by effectively raising awareness of a ton of weird EA ideas.
If you’re just like “man, this x-risk stuff is so fricking confusing and disorienting, but it does seem like in general the EA community has been making an outsized positive contribution to the world’s preparedness for x-risks”, then there are ways to support the EA community broadly (or other similar groups that you think are doing good)—either through donations, or potentially through, like, hosting local EA meetups, or (as I do) trying to make a career out of helping random EA orgs with work they need to get done.
Some potential EA cause areas are niche enough that it’s possible to contribute real intellectual progress by, again, just kinda learning more about a topic where you maybe bring some special expertise or unique perspective to an area, and posting your own thoughts / research on a topic. Your own post (even though I disagree with it) is a good example of this, as are so many of the posts on the Forum! Another example that I know well is the “EcoResilience Initiative”, a little volunteer part-time research project / hobby run by my wife @Tandena Wagner—she’s just out there trying to figure out what it means to apply EA-style principles (like prioritizing causes by importance, neglectedness, and tractability) to traditional environmental-conservation goals like avoiding species extinctions. Almost nobody else is doing this, so she has been able to produce some unique, reasonably interesting analysis just by sort of… sitting down and trying to think things through!
Now, you might reasonably object: “Sure, those things sound like they could be helpful as opposed to harmful, but what happened to the focus on doing the MOST good you possibly can?! If you are so eager to criticize the idea of giving up chocolate in favor of the hugely more-effective tactic of just donating some money to GiveWell top charities, then why don’t you also give up this speculative longtermist blogging and instead try to earn more money to donate to GiveWell?!” This is a totally fair objection, and I’m sympathetic to it. In response I would say:
Personally I am indeed convinced by the (admittedly weird and somewhat “fanatical”) argument that humanity’s long-term future is potentially very, very important, so even a small uncertain effect on high-leverage longtermist topics might be worth a lot more than it seems.
I also have some personal confidence that some of the random, very-indirect-path-to-impact stuff that I get up to is indeed having some positive effects on people and isn’t just disappearing into the void. But it’s hard to communicate what gives me that confidence, because the positive effects are kind of illegible and diffuse rather than easily objectively measurable.
I also happen to be in a life situation where I have a pretty good personal fit for engaging a lot with longtermism—I happen to find the ideas really fascinating, have enough flexibility that I can afford to do weird part-time remote work for EA organizations instead of remaining in a normal job like my former aerospace career, et cetera. I certainly would not advise any random person on the street to quit their job and try to start an AI Safety substack or something!!
I do think it’s good (at least for my own sanity) to stay at least a little grounded and make some donations to more straightforward neartermist stuff, rather than just spending all my time and effort on abstract longtermist ideas, even if I think the longtermist stuff is probably way better.
Overall, rather than the strong and precise claim that “you should definitely do longtermism, it’s 10,000x more important than anything else”, I’d rather make the weaker, broader claims that “you shouldn’t just dismiss longtermism out of hand; there is plausibly some very good stuff here” and that “regardless of what you think of longtermism, I think you should definitely try to adopt more of an EA-style mindset in terms of being scope-sensitive and seeking out what problems seem most important/tractable/neglected, rather than seeing things too much through a framework of moral obligations and personal sacrifice, or being unduly influenced by whatever controversies or moral outrages are popular / getting the most news coverage / etc.”
Hi, thanks again for the detailed reply — I really appreciate the clarity. I’m finding it genuinely eye-opening that many issues I assumed were morally significant turn out to matter far less in practice once scale and impact are properly quantified. I think I was heavily influenced by various online movements that are very loud and visible, so it confused me that EA rarely foregrounded topics like slave labor in chocolate or Coca-Cola’s water practices, despite covering other global issues such as malaria.
One thing I do want to clarify is that there are ethical chocolate companies using fair-trade, non–child-labor supply chains, so it’s not that “all chocolate must be boycotted,” but rather that many major brands have problematic sourcing. Still, your calculations make it clear that a solo boycott makes essentially no difference to the working hours or conditions of any child laborer, and similarly an individual boycott won’t meaningfully affect things like water extraction by Coca-Cola in Africa or India.
I am also not sure what you mean regarding your calculations about buying a child’s freedom at about $6 per hour, and the part about valuing your own hour at $20-30. To be honest, from a consequentialist perspective there isn’t a difference between personally doing harm and letting harm continue, but in this case you aren’t really buying a child’s freedom, you are just not forcing them to work an additional hour, if that reframing makes sense. Kinda like how veganism doesn’t save lives; it’s just about not killing additional animals. To put it quite simply, by refusing to buy slave-labor chocolate you are not really helping people, you are just not hurting them (and buying the chocolate leads to harm).
On the broader question of morally permissible actions, I’ve been strongly shaped by this Aeon article (“Why it is better not to aim at being morally perfect”). I agree that doing genuine moral good matters, but being a 10/10 moral saint is neither realistic nor psychologically healthy. That’s why I find Schelling points useful — for example, the 10% pledge. Without Schelling points, it feels like the only consistent utilitarian answer would be to live extremely frugally and donate almost everything. So my original question was really about which actions rise to the level of meaningful Schelling points. It seems that many things that online activists frame as huge moral imperatives (boycotting certain products, etc.) actually have very small expected impact and thus probably don’t qualify.
On veganism: I’ve been extremely strict (even avoiding foods with small amounts of egg or dairy, even while traveling), but seeing that roughly 75% of the EA Forum isn’t vegan does make me wonder whether relaxing a bit would still be morally acceptable. At the same time, I’m not fully comfortable with an attitude of “it’s fine to cause some harm as long as I donate to GiveWell later,” since that can be used to rationalize almost anything (e.g., “I’ll do X harmful thing like murder a man and offset it with $5k to AMF”). I understand the logic in small, low-impact cases, but taken broadly it seems like a slippery ethical framing.
A (slightly personal) question: do you think one could argue that you might actually have more impact as an aerospace engineer donating 10% of your income than by doing local EA organization work? I imagine it depends heavily on the quality of the contributions and the kinds of community-building work being done, but I’m curious how you think about that tradeoff.
Regarding longtermism: I’ll admit I’m somewhat biased. I’ve absorbed a lot of the “nothing ever happens” attitude, so doomsday scenarios often feel exaggerated to me. But setting that aside, I can acknowledge that global catastrophic risks like nuclear conflict, pandemics, and climate instability are real and non-zero. We literally just lived through a pandemic. My concern is that nearly all meaningful action in these areas ultimately seems to run through political institutions. Research can help, but if political leaders are uninformed or uninterested, the marginal value of EA research feels limited. That sense might also be influenced by my experience with college ethics classes — AI ethics, especially, often felt detached from real-world levers.
Realistically, it seems like the most impactful thing an individual can do for x-risk at the moment is vote for politicians who take these issues seriously, but politicians who are aware of (or influenced by) effective altruism seem rare.
Finally, several of the replies have made me think about the prisoner’s dilemma dynamic underlying many collective-action problems. With things like chocolate, it seems like individual action is (almost) negligible. Veganism is different because the per-unit harm is much larger. But I’m curious how EA generally thinks about prisoner’s dilemmas in areas like climate change, voting, or even the Donation Election. Why should I vote in the Donation Election if my individual vote is almost certainly not decisive? Or more broadly, when do extremely low-probability marginal contributions still matter?
Thanks again — the discussion has been really helpful in clarifying what actually matters versus what merely feels morally salient.
Re: veganism, have you seen the FarmKind compassion calculator? https://www.farmkind.giving/compassion-calculator#try-it
It will tell you how many animals are raised for your food, depending on your dietary type, and how much money would be needed in donations to offset that.
The moral upshot here is that eggs are far worse than dairy from an animal welfare perspective, mostly because cows are a lot larger than chickens: a single cow produces vastly more food than a single hen, so a serving of dairy involves far fewer animal-days of suffering than a serving of eggs. So if you feel like adding animal products to make your life convenient but worry about suffering, add dairy products.
And also donate to effective animal charities. There’s no reason to stick to the few $ per month (or fraction of a $ if it’s dairy) needed to offset the suffering from your diet—you can do much more good than that. Most EAs aren’t really into offsetting. We don’t actually think you should donate less to something because it’s more effective. This is just a calculator to attempt to explain more broadly why effective animal advocacy giving is good.
I skimmed through the website, and I’m not entirely sure how they’re calculating the dollar amounts. The comparisons also seem somewhat subjective, and some of the proposed impacts (e.g., creating more plant-based meat options) don’t obviously translate into measurable reductions in meat consumption.
I’m also not sure what they mean by this statement:
“We don’t actually think you should donate less to something because it’s more effective.”
(All the below numbers are made up for example purposes and don’t represent the cost of chicken-related interventions)
Let’s say that I want to have chicken for dinner tonight. However, I don’t want to cause chickens to suffer. I have worked out that by donating $0.10 to Chicken Charity A I can prevent the same amount of suffering that eating a chicken dinner would cause, so I do that. Then I find out that Chicken Charity B can do the same thing for $0.05, so I do that instead for tomorrow night’s chicken dinner. A charity being 2x as effective means I donate half as much to it. This is the “offsetting” mindset.
Effective Altruists do not (usually) think this way. We don’t treat our donations as aiming to do a fixed amount of good, maximising effectiveness in order to reduce the amount we have to donate. We usually do it the other way around: fix the amount according to our life circumstances (e.g. the 10% pledge), and maximise the effectiveness of that amount in order to do as much good as possible.
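(Here’s a tiny sketch of the contrast, using the made-up $0.10/$0.05 offset costs from above plus an arbitrary $100 budget:)

```python
# Toy contrast between the two mindsets, using the made-up numbers above.

def offsetting_donation(dinners, cost_to_offset_one_dinner):
    """Offsetting mindset: the amount of good is fixed (offset each dinner),
    so a 2x-more-effective charity means donating half as much."""
    return dinners * cost_to_offset_one_dinner

def good_done(budget, cost_to_offset_one_dinner):
    """EA mindset: the budget is fixed (e.g. the 10% pledge), so a
    2x-more-effective charity means twice as much good for the same money."""
    return budget / cost_to_offset_one_dinner   # in "chicken-dinners of suffering prevented"

print(offsetting_donation(365, 0.10), offsetting_donation(365, 0.05))  # $36.50 vs $18.25 per year
print(good_done(100, 0.10), good_done(100, 0.05))                      # 1000 vs 2000 dinners' worth
```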