In fact, ignoring concerns about message complexity and not trying to be too fancy, I might suggest we eliminate any hard percentage standard in favor of a recommended % donation that scales with income. So somebody earning < $10,000-$20,000/year might be advised not to donate. Someone earning $80,000 might be asked to donate 10%. Someone earning $10,000,000/year might be asked to donate 90%. These are just rough numbers. But I think this might be better treated in book form or in tailored appeals to individual people. In the EA community I think it would be nice if we often discussed specific ways we could refine and tailor this community standard in a way that’s optimized for “it’s easy to understand how this number was computed yet it makes sense for me” rather than being optimized for compatibility with a sound bite in a media appearance.
DirectedEvolution
[Question] Does GiveWell list cost-per-QALY for their recommended charities?
Hi Jason, thank you for your response.
To the extent “you should only donate to effective charities” is being conveyed in practice, it’s not clear to me why deploying a 2/8 message is the most effective way to correct that mismessaging.
It’s certainly not the most effective way in all circumstances. But I think that, on the margin, a 2/8 message would be more effective in many circumstances. I think a sophisticated EA take would be that the real goal is to find a substantial yet sustainable level of giving to effective causes, one that is tailored to the individual’s material situation and the constituency of their moral parliament.
For people who haven’t encountered this complex bundle of ideas and aren’t going to give us a huge amount of time before writing us off, a 2/8 message gestures at subtlety of thought, implying moral parliamentarian ideas, the idea of distinguishing effective causes from personal passions, and the idea of a substantial habit of charitable giving.
A 10% message hits the latter two points, but implies that we’re trying to frame the personal passions of the target of our giving appeal as unworthy targets of charitable giving. This is indeed the direct implication of the idea that altruistic efficiency follows a power law distribution—ineffective charities are massively worse than the best, and we lose huge value on the margin when we direct funds and energies to suboptimal causes.
But this totalizing view is one of the major reasons why even people who see the sense of cost-benefit calculations resist thinking in this manner. The implications are profoundly destabilizing if you don’t moderate them. So we moderate them. But when you get hit with the realization that there’s a whole community of people whom you haven’t met, who think in terms of cost/benefit altruistic calculations, and that the straight-line calculation is that you ought to give everything to a narrow band of super-effective causes and live on the level of the global poor, that borks the brain and causes people to distance themselves.
A 10% standard addresses many of those concerns and is much better than the straight-line calculation by protecting people from utilitarian ravages and promoting movement growth. But its limit is that it seems to imply, symbolically, that every other area in life is valueless and that our aim is to reduce the amount of non-EA charitable giving you engage in to zero. This is a bad argument but one that people reliably seem to come up with, because people aren’t all mathematically literate or careful reasoners the first time they encounter an idea. A 2/8 message addresses this specific way that the thought process and conversation can go wrong by saying: “Yes, your passion causes are valuable and we do approve of them, and you can keep giving to them, and that is a positive thing that you do. We are also asking you to recognize the sheer magnitude of clear, unambiguous good you can do by donating to things like X-risk prevention or malaria bednets, and to really step up your donating in order to support these causes.” Putting hard numbers on it gives people a sense of the proportions we might consider appropriate as a community standard, which is why “2%/8%” and not just the qualitative description is a necessary part of the message, just as “10%” is necessary rather than “a very substantial level of giving.”
I struggle to come up with a rationale that gives someone the green checkmark of moral approval for giving 8% to effective causes + 2% for music for predominantly rich people (i.e., opera) but denies the checkmark for just the 8% to effective causes.
I originally meant to include a metaphor that I think is a helpful reframing of the idea of a community standard/line in the sand/Schelling point, but apparently I never worked it into the main post.
I think of a community standard not as a rigid pass/fail number, but as an elastic tether. We have anchored it at 10%. Obviously, for a solid earner in a country like the USA, being at 9% is a little worse, 11% a little better, but we don’t encourage you to “stretch the standard” all the way up to 50% (because it can set an intimidating or extremely demanding-sounding image of what our standard is and provoke a sense of being not good enough in ways that are bad for community growth) or all the way down to 0%. But a little variability around 10% really doesn’t matter much. The idea of the “elastic” standard is that it resists further deformation the further you try to move away from it. Having elasticity built into the standard makes it a better standard because it emphasizes the ways our community embraces flexibility and personal fit while still having actual, meaningful standards.
So yeah, full agree, no “green checkmark” mentality. More like a “green tether” or something like that.
Your link didn’t get pasted properly. Here it is: Lawful Uncertainty.
For example, can I do better than just deferring to the “largest and smartest” expert group on “Might AI lead to extinction?” (which seems to be EA)? Can I instead look at the arguments and epistemics of EAs versus, say, opposing academics and reach a better conclusion? (Better in the sense of “more likely to be correct”.) If so, how much, and how should I do that in the details?
Deference is a major topic in EA. I am currently working on a research project simulating various models of deference.
So far, my findings indicate that deference is a double-edged sword:
You will tend to have more accurate beliefs if you defer to the wisdom of the crowd (or perhaps to a subject-matter expert—I haven’t specifically modeled this yet).
However, remember that others are also likely to defer to you. If they fail to track the difference between your all-things-considered, deferent best guess and the independent observations and evidence you bring to the table, this can inhibit the community’s ability to converge on the truth.
If the community is extremely deferent and if there is about as much uncertainty about what the community’s collective judgment actually is as there is about the object-level question at hand, then it tentatively appears that it’s better even for individual accuracy to be non-deferent. It may be that there are even greater gains to be made just by being less deferent than the group.
Many of these problems can be resolved if the community has a way of aggregating people’s independent (non-deferent) judgments, and only then deferring to that aggregate judgment when making decisions. It seems to me progress can be made in this direction, though I’m skeptical we can come very close to this ideal.
So if your goal is to improve the community’s collective accuracy, it tentatively seems best to focus on articulating your own independent perspective. It is also good to seek this out from others, asking them to not defer and to give their own personal, private perspective.
But when it comes time to make your own decision, then you will want to defer to a large, even extreme extent to the community’s aggregate judgments.
Again, I haven’t included experts (or non-truth-oriented activists) in my model. I am also basing my model on specific assumptions about uncertainty, so there is plenty of generalization from a relatively narrow result going on here.
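For readers who want a concrete picture of what I mean by “simulating models of deference,” here is a minimal toy sketch in Python. To be clear, the sequential-reporting structure, the blend weight, and all parameter values below are illustrative assumptions made up for this comment, not the actual model from my project:

```python
import random
import statistics

def simulate(n_agents=200, deference=0.9, noise=1.0, truth=0.0, seed=0):
    """One run of a toy deference model.

    Each agent receives a noisy private signal of `truth`. Agents report
    in sequence: a deferent agent blends the mean of all prior public
    reports with their own private signal, weighted by `deference`. The
    independent baseline just averages the raw private signals.
    Returns (error of deferent aggregate, error of independent aggregate).
    """
    rng = random.Random(seed)
    signals = [truth + rng.gauss(0, noise) for _ in range(n_agents)]

    reports = [signals[0]]  # the first agent has nobody to defer to
    for s in signals[1:]:
        prior = statistics.mean(reports)
        reports.append(deference * prior + (1 - deference) * s)

    deferent_err = abs(statistics.mean(reports) - truth)
    independent_err = abs(statistics.mean(signals) - truth)
    return deferent_err, independent_err

# Count, over many random runs, how often the independent aggregate
# lands closer to the truth than the deferent cascade.
wins = sum(1 for s in range(50)
           if simulate(seed=s)[1] <= simulate(seed=s)[0])
```

In this toy version the community aggregate of deferent reports is usually further from the truth than the simple average of independent signals, because early signals dominate the cascade and later evidence gets washed out. That is the flavor of the “inhibits the community’s ability to converge on the truth” point above.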
Hi mhendric,
Thanks for your feedback! Researching and writing up my posts takes enough time that adding in individual zoom calls on top would be tough—I work full time. Maybe at some point?
I see two strains of EA criticism. The one you point out comes from EA’s ideological opponents. That doesn’t mean they are bad, wrong, or that their lives revolve around some sort of other political activism. It means that they have decided on a different organizing ideology for their worldview that generates conclusions incompatible with EA’s way of looking at a variety of questions.
I don’t think it’s productive to try and persuade this group of people.
By contrast, I think a large number of people who are potential donors exist who can basically get behind EA ideas, but who would raise concerns along the lines that the 2%/8% pledge idea is meant to address. Here, the barrier isn’t an organized ideology incompatible with EA. It’s the way we organize our conversations with the public and the background perceptions that people have of EA before they encounter our people, books, websites and donation platforms, and before they’ve put more than a few seconds of thought into our ideas.
The reason I am carrying out this series of posts is partly because I think my proposal is genuinely useful and something we should be experimenting with. But it’s also because I think EA can benefit from kicking the tires of its central ideas. It helps us maintain a culture of open-mindedness, and it also exposes where an apparent community consensus and cogent framing of an issue in fact needs work. I continue to think this is the case in this area.
Update: based on analytics and timing, I now believe that there are one or two specific individuals (whose identities I don’t know) who are just strong-downvoting my posts without reading them.
While they may be doing this because they disagree with what they can glean of my conclusions from the intro, I do not consider this to be different from suppression of thought. I am not certain this is happening but it is the best explanation for the data I have at this time.
I continue to find the speed at which these posts accrue at least one early strong-downvote surprising and frustrating. Once again, I invite downvoters to articulate their disagreement in a comment. Based on analytics, I know it is one of the first 6 readers, and that they spent less than 5 minutes on this 11-minute article. I precommit to responding with a “thanks for your feedback” or something more positive and thoughtful than that.
I hope you will join me for further discussion and debate in my next post, where I dig in deeper to some of the objections you raised here!
Functions of a community standard in the 2%/8% fuzzies/utilons debate
As one additional note, thank you for linking to the survey about people’s familiarity with EA. Although I think it is probably useful evidence, and I am extremely supportive of attempts to gather such evidence in general, one of my immediate concerns is that the data was gathered in April 2022.
This means the results predate both Will MacAskill’s high-profile publicity tour for What We Owe The Future as well as the downfall of FTX. My guess is that the number of people who have heard of Effective Altruism has increased substantially since then. The New York Times has 8.6 million digital subscribers and has covered EA a decent amount over the last year (often negatively), although I am confident that only a fraction of its subscribers read these articles.
What we can learn from it is how EA was perceived prior to these two important signal-boosting and reputation-altering events.
One specific relevant point is the figure for how many people have heard of GWWC relative to other EA orgs: it is the second-most-recognized of the institutions they asked about, at 4.1% of respondents (vs. 7.8% for GiveWell, the most recognized organization).
I am not a professional pollster, so my ability to parse the results in a sophisticated way is limited. But I give some deference to the idea of the Lizardman Constant—the idea that a small fraction of respondents (on the order of 2-5%) will endorse just about anything in the context of a poll, including the idea that Lizardmen rule the earth. As most of the results are roughly in this range, I have to treat them with moderate skepticism.
Yes, I have tentative plans to conduct some interviews and MTurk surveys as a cheap and easy way to gather more empirical information. I don’t think these will resolve the question, but hopefully they will continue to elevate the discussion with critique that is less focused on convenience sampling and ad hoc interpretation by a potentially motivated debater (which is how I would criticize the quality of the evidence I present here).
That makes sense, and thank you for providing that context for your vote. Part of the challenge here is that our differences seem to be the result of more than one belief, which makes it challenging to parse the meaning of upvotes and agreevotes.
Thank you Isaac. Based on this post’s more positive reception, I’m more inclined to update in favor of your view.
Hi mhendric. First, thank you for your continued engagement and criticism—it sharpens my own thinking and encourages me to continue. I will respond in greater depth to some of the critiques you’ve made here in my next post.
Briefly:
My wording has obviously been muddy. My proposal is not a mandatory 2%-to-fuzzies-causes pledge, but a 10% pledge of which 80% is allocated to effective causes and 20% is explicitly for whatever cause the donor is passionate about. This discretionary 20%-of-the-10% (i.e., 2% of annual income) could also go to effective causes, but it could equally go to the arts, alma maters, or anything else. In this way, the modification encompasses the original GWWC pledge, but also adds a flexible portion for those who are not comfortable with its structure or who perceive absolutism in it.
I agree with you that we can guide EAs to a more sophisticated interpretation of the pledge internally. My concern about the current format of the pledge is that it misdirects conversations with non-EAs, prevents a deeper engagement with these ideas and giving habits, and contributes to EA’s perception of absolutism by that portion of the public that is aware of the movement at all. This is why having a concrete way to address these concerns seems beneficial for structuring conversations about these ideas, and also for increasing the amount of donations we are able to motivate for effective causes. I believe it would make EA a bigger tent community than it is at present.
While I agree strongly that much criticism of earning to give relates to concerns about net-negative professions and greenwashing, I also found in this research that a substantial portion of the critique is specifically about the 10% level and the idea of 100% donations to causes deemed effective. As examples: Trevor Noah mimics the critique an ordinary person might make in a country without a social safety net, saying ‘maybe you in the UK can afford a 10% donation to charity, but I’m in the USA, where our healthcare is very expensive.’ The Kristof column I link questions the rule that 100% of donations would go to effective charities. These are also impressions that non-EAs I’ve spoken with about EA have picked up in conversation and that I have struggled to address.
I agree that 10% is a Schelling point. I believe that a thorough understanding of the logic of Schelling points would overcome the slippery slope objection of “why not X+1%.” Where I believe you and I disagree is the idea that a Schelling point cannot be modified without destroying it. In my view, a Schelling point, once established, is like an elastic tether. The further away from the anchor point you go, the more resistance you meet. But if there are big benefits to marginal moves away from the exact tether point, then you should be able to do so. Metaphorically speaking, if Grand Central Station is the place to converge to find your friend when you’re both lost in New York City, you can sit on a park bench outside, but you can also get a (vegan) hotdog from the stand nearby. I believe that a 2%/8% or 10%/12% modification is comfortably close to the tether point to not break the Schelling point, while providing the benefits I have described.
Each critique you have made deserves a full post in reply, and I anticipate that some or all of them will get one as I continue this series. These paragraphs are just meant as compressed versions of my beliefs at this time, not comprehensive arguments.
Thank you for your response.
I completely agree that earning to give and the GWWC pledge are conceptually distinct. Ideally, anyone dealing with these ideas would treat them as such.
Where I disagree with you is that my post is conceptually ‘conflating’ these two ideas. Instead, my post is identifying that a bundle of associated ideas, including the GWWC pledge and earning to give, are prominent parts of EA’s reputation.
Here is an analogy to the point I am making:
When people think of engineering, they think of math, chemicals and robots.
When people think of Effective Altruism, they think of earning to give and donating 10% of your income to effective charities.
The abstract relationship I am drawing with this analogy is that people who are not part of a specific community often have a shallow, almost symbolic view of major topics in the community. They do not necessarily come to a clear understanding of how all the parts fit together into a cohesive whole. My post is not at all arguing the virtues of earning to give or a 10% pledge. It is arguing that these two topics are part of a bundle of ideas that people associate with EA’s brand or reputation, in response to the debate suggested by the two seemingly contradictory claims I quoted at the top of the post.
I don’t think my post represents the critics it cites as saying donating 10% of one’s income to charity is a bad thing to do. What they critique is a perception of absolutism and the tension inherent in setting any specific standard for such a pledge, given various forms of inequality.
On the one hand, this doesn’t exactly reflect the true beliefs of EA thought leaders: MacAskill calls for the ultra-wealthy to donate as much as 99% of their income, and Giving What We Can has a Trial Pledge option, which is a way to make a smaller and more time-limited commitment. Nobody is stopping you from donating 10% to an effective charity and an extra 2% to the opera.
But psychologically, when people are processing the complex bundle of ideas that EA has to offer, in the context of a media appearance or magazine article, these conceptual distinctions can be lost. People really will come away with reactions like:
So you’re saying I have to donate at least 10% or I’m a bad person?
So you’re saying that everything I donate has to go to EA charities and I can’t donate to anything else?
So you’re saying that anything I donate to other causes is basically worthless compared to donating to EA causes?
So you’re saying that my knowledge and intuition about the charities I’m interested in and the good they do in the world is valueless compared to your big fancy spreadsheets?
So you’re saying that [my favorite charity] isn’t effective? What the hell do you know about it???
Isn’t everybody who’s donating to charity earning to give?
And EAs will argue with them in a way that exacerbates these conflicts.
Recognizing the ways that a call for a 10% donation to effective charities can have a negative psychological impact on potential donors, relative to a minor modification to a 2%/8% split, is what my articles are about more broadly. This specific post is just meant to look holistically at how the 10% pledge, and its bundle of associated ideas, people and organizations, is represented in media coverage of EA.
To readers of this post, I would like to note that a small number of people on the forum appear to be strong-downvoting my posts on this subject shortly after they are published. I don’t know specifically why, but it is frustrating.
For those of you who agree or disagree with my post, I hope you will choose to engage and comment on it to help foster a productive discussion. If you are a person who has chosen to strong-downvote any of the posts in this series, I especially invite you to articulate why—I precommit that my response will be somewhere between “thank you for your feedback” and something more positive and engaged than that.
See my new post for a partial response to this portion of your argument:
Firstly, I don’t see any benefit from the proposal. I don’t think the 10% norm forms a major part of EA’s public perception, so I don’t believe tweaking it would make any difference. If anything, 2%/8% makes it more weird (not least because it no longer matches the tithing norm). You haven’t made any compelling argument for the reputational advantage to be gained either here or in your previous post, let alone that this is the most effective way of gaining reputation.
See my new post for a partial response to this portion of your argument:
I’m not really seeing a dire need for this proposal. 10% effective donations has brand recognition and is a nice round number, as you point out. It is used by other groups, such as religious groups, making it easy to re-funnel donations to e.g. religious communities to effective charities. This leaves 90% of your income at your disposal, part of which you may spend on fuzzy causes. It does not seem required to me to change the 10% to allow for fuzzy donations, nor do I think there’s a motivation to make donations to fuzzy causes morally required.
I liked Lucretia’s initial response quite a bit. For a small bit of context about my personal identity and background, I’m a hetero cis man who has witnessed severe problems with sexual abuse in the rat/EA and also in other subcultures over the last 10 years, and has had many conversations with women on these issues.
The following are just my thoughts. Their length, the number of conjunctions, and the fact that I’m a hetero cis man makes me a little bit anxious about posting them, even though I think they are a carefully thought-out and constructive contribution. If the balance of opinion appears to be not just disagreement but a perception that this is counterproductive toward the project of seeking sexual justice in the rationalist/EA community, then I will delete it. So please be explicit (I will weight comments/PMs including reasons for your disagreement more heavily than karma in this decision).
Before I go on, I need to make one point extremely clear:
I will be talking about the idea of sexually “risky behavior” here. What this means specifically is flirtatious/sexual acts that have the potential to be perceived as consensual/desirable by both participants, but which lack explicit consent guardrails such as verbal declarations of consent, involve power imbalances, involve intoxication, or take place quickly enough that there are real risks of perceiving desire/consent when it’s not actually there.
When I talk about sexually “risky behavior,” I am not talking about a situation in which sexual abuse actually occurred. That’s not a risk—that is a disastrous, failed, violent outcome.
These are charged topics and I am not confident I’m doing an adequate job despite my best efforts, so I am happy to constructively respond to criticism of my language choices as well as the conceptual framework.
I’d start by jumping off this portion of Lucretia’s comment:
One of the challenges with many forms of sexual abuse (and harassment, which I’m here going to lump in with abuse) is that what makes an individual act of sexual abuse harmful is often much more contextual than what makes an individual act of non-sexual abuse harmful.
Consider some typical examples of non-sexual abuse:
Physical violence
Demeaning and disrespectful language
Neglect of a dependent
Stealing, financial coercion, or nonconsensual control of finances
Spreading negative false rumors to damage somebody’s reputation
All of these forms of non-sexual abuse typically involve “atomic” actions that are almost always intrinsically negative (i.e. they are not usually “enjoyable, an act of trust, and a celebration of life”). They are straightforwardly brutal or cruel acts. The rare occasions on which some people might consider them acceptable typically require elaborate justifications (spanking children, fighting back against an abusive spouse, conservatorships, teasing, or inaccuracies in satirical biopics like Vice).
In escalating social interactions toward increased intimacy, participants frequently want some level of ambiguity or plausible deniability in order to avoid the embarrassment and awkwardness of rejection. Communicating while preserving plausible deniability is a delicate art and also a generator of misunderstandings, as well as grotesque redpill-style distortions about how women think and feel about sex. It is this force that makes people want to flirt in conventional ways, rather than having the expectation that one can straightforwardly ask about sexual interest and receive an honest, direct answer. Sexual abuse occurs in a context where plausible deniability is felt to be a crucial part of sexual negotiation, and where common outcomes of sexual negotiation are normal hurt feelings, embarrassment, letdown, frustration and discomfort. This serves as camouflage for sexual abuse.
In contrast to non-sexual abuse, sexual abuse often (but not always) involves “atomic” actions where:
A substantial amount of context is required in order to understand why those individual acts were unequivocally harmful or inappropriate, or why the harms aren’t just normal negative feelings from a failed normal and healthy sexual negotiation, but are instead the result of sexual aggression or violence.
We may have to defer to the survivor(s) in order to learn that context, because adequate hard evidence does not exist.
In order to address these problems while minimizing new ones, we often rely on a few techniques:
Looking for a pattern of behavior by the alleged perpetrator.
Relying on sufficiently accurate stereotypes and patterns of behavior on a community level. For example, the proliferation of stories of women being sexually abused in EA/rationalist/AI-safety-adjacent Silicon Valley communities by high-status men lowers my threshold for believing an accusation against a high-status SV man in these communities, even if he has a previously unblemished reputation.
Installing community norms around what we regard as a low-risk vs. high-risk sexual interaction from a consent standpoint. For example, touching a woman a man has just met on the upper thigh within a minute of meeting her is a very high-risk form of sexual interaction.
Note that the challenge here is that some individuals will mutually want to engage in sexual activities regarded as high-risk by their communities, without the encumbrance of careful, explicit declarations of consent at each step along the way. Their ability to do so on a practical level will be impaired by the community’s perceptions of these activities as high-risk. This is a tradeoff the community as a whole makes, typically judging that the loss from adopting more conservative norms around sex is outweighed by the huge gains in terms of sexual safety. Furthermore, perceptions and the reality of large amounts of sexual abuse also make sex more difficult, further tilting the norm toward greater conservatism.
Installing a norm that failure to avoid being accused of sexual abuse is sufficient grounds for punitive action by the community, though not necessarily by law enforcement, often with a lower threshold the more high-risk the sexual interaction was.
These techniques are controversial, both in theory and in practice, because they are vulnerable to adversarial exploitation—both by people bent on destroying reputations for personal gain and especially by perpetrators of sexual abuse bent on undermining investigations into their own abusive activities. There is a principled and an unprincipled critique, and it is easy to hide the unprincipled critique as a principled one.
The foregoing is not meant as an argument in favor of one approach or another to dealing with sexual abuse. Instead, it is intended as an answer to the question in the comment I’m responding to: what makes sexual abuse different from other forms of abuse? I believe an important defining difference is the set of problems I’ve articulated here. The result is that it feels qualitatively very different to deal with sexual abuse than other forms of abuse.