Thomas Kwa
Mechinterp researcher under Adrià Garriga-Alonso.
> This is primarily the instrumental value of your enjoyment, right? Otherwise, you should compare your going vegan directly to the suffering of animals by not going vegan
I think you’re drawing the line in an unfair place between instrumental and inherent value. Most EAs I know are not so morally demanding on themselves as to have no self-interest. If someone is well-off in a non-EA job and donates 40% of their income to GiveWell or x-risk charities, they’re a fairly dedicated EA. But donating “only” 40% still implies a >10:1 income disparity between oneself and the global poor, and thus that one values one’s own enjoyment >50x more than that of an arbitrary human. I think the norm of being less than maximally demanding is beneficial to the EA community and protects against unproductive asceticism. So self-interest that looks inherent can actually be instrumental.
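For concreteness, here is a rough sketch of the arithmetic behind those ratios. Every number in it (the donor's income, the poor person's consumption level, the utility curvature) is an assumption I've picked for illustration, not a claim about any particular person:

```python
# Rough sketch of the income-disparity arithmetic above.
# Every number here is an assumption chosen for illustration.

donor_income = 60_000     # assumed annual income of a well-off donor, USD
donation_rate = 0.40      # the 40% donation rate from the example
poor_consumption = 700    # assumed annual consumption of a very poor person, USD

kept = donor_income * (1 - donation_rate)   # what the donor spends on themselves
income_ratio = kept / poor_consumption
print(f"Kept after donating 40%: ${kept:,.0f}")   # $36,000
print(f"Income disparity: {income_ratio:.0f}:1")  # ~51:1, so >10:1 holds

# Under isoelastic utility, marginal utility of consumption is c**(-eta), so a
# marginal dollar is worth income_ratio**eta times more to the poor person;
# keeping it anyway implies weighting one's own enjoyment by at least that factor.
eta = 1.0  # assumed curvature (log utility); higher eta gives a larger factor
print(f"Implied weight on own enjoyment: ~{income_ratio ** eta:.0f}x")
```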
Epistemic status: I am a university student who has read a lot of EA material but has little knowledge about B1G1 programs. I thought carefully about this post for a few hours.
I think there’s a wide spectrum of possible effectiveness depending on implementation, but in practice B1G1 programs seem unlikely to be much more effective than the average non-EA charity, and a factor of at least 10 behind many EA causes.
Overall, the strictest form of B1G1, where a company gives away the exact same product it sells, seems gimmicky to me. The reason is that the needs of people in the developing world are vastly different from those of the wealthy people buying the products. I think market forces might even dictate that these programs are not much more effective than direct cash transfers: if they were much more effective, the target population would be willing to buy the products themselves, which would cannibalize the company’s sales. [1] None of the 3 companies you list is so naive: they mostly outsource their giving to charities. But this comes with its own problems: they don’t apply their own domain knowledge to their interventions.
Warby Parker works with Pupils Project and VisionSpring. Pupils Project operates in the US, so it’s unlikely they are cost-effective. VisionSpring at least works in Bangladesh. According to a [GiveWell interview][2], they do undercut commercial prices by a factor of 2 by selling glasses at cost for 150 taka ($1.77) [3], but I doubt that glasses are a leveraged intervention in the developing world. GiveWell does not currently recommend VisionSpring as a top or standout charity, instead recommending charities that can beat cash by a factor of 5-60 and are supported by very strong evidence.
TOMS has stopped distributing shoes in favor of donating 1/3 of their profits to a fund managed by their giving team. Their 2019 impact report is basically a marketing document full of infographics; it appears they make some attempt at evaluating the impact of charities, but don’t follow effective altruist principles. For example, they fund projects in the US, and clean water programs (the Gates Foundation has studied the water, sanitation, and hygiene sector extensively and finds better opportunities in sanitation).
P&G’s MNT vaccine program runs through UNICEF, which is massively overfunded compared to charities recommended by GiveWell and the Open Philanthropy Project.
There are more fundamental problems. The B1G1 website says they primarily evaluate causes by “progress of the project activity” and financial records; it’s likely they’re falling for the overhead myth and vastly underweighting the effectiveness of the cause area, which is left up to the company. EA has at least three branches where effective cause areas are found: global health/poverty, farm/wild animal welfare, and existential risk. It would be ideal if companies’ B1G1 programs either supported effective programs in one of these areas or found a unique niche. But B1G1 programs need to yield good PR, and sometimes have the additional constraint of providing a tangible product, so it appears they’re limited to a small subset of global health interventions, which in these three examples look no better than the average charity in terms of effectiveness. I don’t see any companies with B1G1 programs in farm or wild animal welfare, probably because the area is politically contentious. Existential risk causes seem even less likely to yield good PR because they’re the exact opposite of the tangible transaction at the heart of B1G1. And B1G1 seems unlikely to let companies find a unique niche, given that they’re outsourcing to nonprofits.
Finally, I have other concerns. B1G1 companies could be decreasing the amount given to more effective charities, which, given that some charities are hundreds or thousands of times more effective than others, might cause net harm. They also might be using such programs to cover up socially irresponsible behavior (e.g. poor treatment of factory workers, or contributing to high-suffering animal agriculture).
Since this comment is rather long, I’ve split it into two, with the second comment directly answering the 12 questions.
[1]: See https://www.givewell.org/international/charities/income-raising-goods for why. Other GiveWell charities manage to outperform cash because they don’t sell commodities—individual families can’t buy a school deworming program.
[2]: https://files.givewell.org/files/conversations/VisionSpring_05-17-19_(public).pdf
[3]: Strangely, they sell glasses for $0.85 each on their website. Perhaps they have high distribution costs.
It looks like you’re fairly new to effective altruism, so you might want to see my other comment or read the EA Handbook for more of the reasoning behind these answers.
1) I’m not affiliated with CEA (nor are most of the people on this forum), but there are certainly forms of philanthropy more in line with the principles of effective altruism.
2) Effectiveness is often estimated as importance x neglectedness x tractability. There are good reasons for this: when correctly formalized, it’s an estimate of the good done per extra unit of resources; see the sketch below. I think most consumers are better off either buying from socially responsible non-B1G1 companies, or buying from any company and donating the money saved to either GW top charities (which rate much better on importance and neglectedness) or high-impact existential risk, farm animal welfare, or wild animal welfare causes, which can rate even better depending on your value system. https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/
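As a sketch of that formalization (this is the standard 80,000 Hours-style decomposition, written in informal notation; nothing here is specific to B1G1), the three factors chain together so the units cancel to good done per extra dollar:

```latex
\frac{\text{good done}}{\text{extra dollar}} =
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
```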
3, 4) The incentives of B1G1 companies seem to push them towards relatively ineffective causes, and they might indirectly be causing net harm.
5) I would be happy if P&G switched from MNT to bednets. It’s possible the marketing could be equally good since malaria affects so many children under 5.
6) This is a valid criticism. Since highly effective causes are rare, any restriction makes it hard to find one.
7) Not sure.
8) I don’t think this is fair. Neonatal tetanus causes infant mortality, and the MNT vaccine reduces it, even if there are more effective causes to address. In general, addressing institutional/systemic issues can sometimes be more complicated and costly than directly attacking the problem.
9) Given that these companies aren’t currently giving to EA causes themselves, it’s hard for me to imagine such companies recommending them to consumers.
10) I’m skeptical of claims that millennials cause this or that trend, because they’re such a broad group. But you could look at data for this one: for example, polls that ask about inclination toward buying B1G1 products, broken down by age range.
11) EA has grown over the last decade, but total donations as a percentage of GDP are more or less flat. If you mean the growth in EA, that’s too complex a question for me to answer.
12) Not sure.
Glad I could help. By the way, it came to my attention that GiveWell is investigating the cause area of providing glasses in developing countries: https://www.givewell.org/international/technical/programs/eyeglasses#How_cost-effective_is_the_program
This is promising, but I still endorse the general stance that B1G1-type programs have obstacles to overcome to reach effectiveness.
This type of content might be more suited to LessWrong, and you might get better feedback/engagement there.
Proportional representation?
Related: 80k podcast on patient philanthropy.
Have you heard the 80,000 Hours podcast episode with Will MacAskill? The first hour has a decent exploration of asymmetries and similar deontological concerns, and MacAskill’s paralysis argument is a fairly good argument against them.
I think this could be more useful for people who are slightly downvoted, or whose posts just don’t get much attention. I remember a few recent highly-downvoted posts and comments (below −10 or so), and all of them seem to have well-written feedback; sometimes more thought was put into the feedback than the original post (not necessarily a bad thing, but going even further could be a massive waste of energy).
People who provide feedback also have to want to engage. On Stack Exchange, closing a question requires a reason, but mods and high-rep users are known to close poorly-written questions for vague reasons without providing much feedback. An even worse failure mode would be users being disincentivized from downvoting because they don’t want to be added to the feedback list.
I notice that I meant to link to this different episode on the non-identity problem but found it didn’t really fit and rationalized that away, so my comment may not be relevant.
I thought this talk was brilliant, not least in the specific terms you mentioned. I often talk to my EA friends about “counterfactual impact”, leverage, and “comparative advantage”, and often have a hard time switching gears to talk to non-EAs. I can imagine this slight shift in terminology to “cause-and-effect evidence”, leverage, and “personal advantage” hitting close to the core ideas while sounding much friendlier. Most of the talk was immediately actionable as well. Thank you for making it.
Say an expert (or a prediction market median) is much stronger than you, but you have a strong inside view. What’s your thought process for validating it? What’s your thought process if you choose to defer?
Is this still actively in use in September 2020?
I’m worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to “conventional” human values seems worse, for a variety of reasons:
- There is no single “conventional morality”: it seems very difficult to compile a list of what every human culture thinks of as good, and it’s not obvious how one would form a “weighted average” between these.
- Most people don’t think about morality much, so their beliefs are likely to contradict known empirical facts (e.g. the cost of saving lives in the developing world) or be absurd (placing higher moral weight on beings that are physically closer to you).
- Human cultures have gone through millennia of cultural evolution, such that the values of existing people are skewed to be adaptive, leading to greed, tribalism, etc.; Ian Morris says “each age gets the thought it needs”.
However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with philosophers to cross-reference between these while fixing inconsistencies and removing values that seem to have an “unfair” competitive edge in the battle between ideas (whatever that means!).
The potential payoff seems huge, as it would expand the basis of EA moral reasoning from the intuitions of a tiny fraction of humanity to that of thousands of human cultures, and allow us to be more confident about our actions. Is there a reason this isn’t being done? Is it just too expensive?
I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander has suggested that terrorists may just be people who take arguments seriously that the rest of us are protected from by epistemic learned helplessness.
One possible way to identify views reasonable enough to develop tools for is checking that they’re consistent under some amount of reflection; another could be checking that they’re consistent with facts, e.g. the lack of evidence for supernatural entities, or the best knowledge on the conscious experience of animals.
I’ve upvoted some low-quality criticism of EA. Some of this is due to emotional biases or whatever, but a reason I still endorse is that I haven’t read strong responses to some obvious criticisms.
Example: I currently believe that an important reason EA is slightly uninclusive and moderately undiverse is that EA community-building was targeted at people with a lot of power (rich people, top university students, etc.) as a necessary strategic move. It feels like it’s worked, but I haven’t seen a good writeup of the effects of this.
I think the same low-quality criticisms keep popping up because there’s no quick rebuttal. I wish there were a post of “fallacies about problems with EA” that one could quickly link to.
It’s not on the 80k list of “other global issues”, and doesn’t come up on a quick search of Google or this forum, so I’d guess not. One reason might be that the scale isn’t large enough—it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.
Here are my thoughts, which may sound overly critical, but are an honest attempt to communicate my ideas clearly.
When I start reading, I immediately notice two red flags:
- The argument is formatted as a long manifesto by someone without a known track record of good epistemics. The manifesto claims to solve global cooperation, something many competent people have tried hard to solve.
- The idea of a type of transformative knowledge that causes people to suddenly ignore their current incentives and start cooperating sounds fantastical.
Because of these red flags, I decide that the claim is extraordinary and you need to provide extraordinary evidence. Reading on, I notice more problems. To be clear, I don’t think patching these would save the thesis: I would still be skeptical due to the prior implausibility and the lack of a clear, plausible plan for increasing the world’s empathy levels by 10%.
- Aligning everyone’s beliefs won’t solve conflict; you need to fix structural problems too.
- If you could communicate obvious true beliefs and get people to internalize them properly, everyone would be an EA. A general method of communicating non-obvious true beliefs about the nature of reality to people, and getting them to act on them, sounds implausible.
- You say “At some critical point a positive feedback loop will emerge so that every human becomes supersapient over time.” If this is the natural result of some small critical mass of people becoming supersapient, why has Buddhism, with its millions of enlightened practitioners over thousands of years of existence, not taken over the world?
The version of this idea that is scaled back to be plausible to me sounds something like “Scientists should study the benefits of meditation more; with a LOT of funding and rigor this could possibly get past ‘does meditation work’ to identifying specific benefits and best practices. People should also practice meditation and, if they can safely, experiment with psychedelics, to better understand themselves and possibly become more rational and empathic.” That’s something I believe, but interventions may not be cost-effective enough to be an EA cause area. (There are EA-adjacent efforts to improve mental health in the developing world, but not many stand out as highly leveraged.)
Someone I know also noticed this a couple of months ago, so I looked into the methodology and found some possible issues. I emailed Joey Savoie, one of the authors of the report; he hasn’t responded yet. Here’s the email I sent him:
Someone posted an article you co-authored in 2018 in the Stanford Arete Fellowship mentors group, and the conclusion that wild chimps had a higher welfare score than humans in India seemed off to me. My intuition was that chimps can control their environment less well than human hunter-gatherers, have a less egalitarian social structure, and lack the huge amount of infrastructure humans have built around food. This seemed like it could reveal either a surprising truth, or a methodological flaw or biases in the evaluators; I read through the full report and have some thoughts which I hope are constructive.
- The way humans are compared to non-humans seems too superficial. I think giving 6 points to humans in India vs. 9 points to wild chimpanzees, based on the high level of diagnosed disability among people in India, is misleading, because we’ve spent billions more on diagnosing human diseases than chimp diseases.
- Giving 0 points to humans in India for thirst/hunger/malnutrition, while chimps get 11, seems absurd for similar reasons. If we put as much effort into the diet of chimps as into the diets of wealthy humans to get a true reference point for health, I wouldn’t be surprised if more than 15% of chimps were considered malnourished. Also, the untreated drinking water consumed in India is used to support this rating, but though untreated water causes harm through disease, it shouldn’t be in the “thirst/hunger/malnutrition” category. [name of mentor] from the chat sums this up as there not being a “wealthy industrialized chimps” group to contrast with.

I’m wondering if you see these as important criticisms. Do you still endorse the overall results of the report enough that you think we should share it with mentees, and if so, should we add caveats?
The CEA founding team seems like the absolute best case for value drift, because to found CEA one must have a much higher baseline inclination towards EA than the average person. They also probably have a lot of power, which helps them control their environment, while many EAs would be forced into non-EA lifestyles by factors beyond their control. So a 25% drift rate among the original CEA team feels scarier to me than 40-70% among average EAs.