Yeah, recidivists reverted once, so it seems reasonable to expect they’re more likely to revert again. That makes the net impact of re-converting a recidivist unclear. Targeting recidivists may be less valuable even if they’re much easier to convert.
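To make that trade-off concrete, here’s a minimal sketch with invented numbers (none of these figures come from any actual data): what matters is roughly the expected number of retained converts per unit of effort, i.e., the probability of staying converted divided by the cost to convert.

```latex
% All numbers hypothetical, purely to illustrate the trade-off.
% Expected retained converts per dollar = P(stays converted) / cost to convert.
\underbrace{\frac{0.4}{\$10} = 0.040}_{\text{recidivist: cheap but likely to revert}}
\qquad \text{vs.} \qquad
\underbrace{\frac{0.8}{\$30} \approx 0.027}_{\text{first-timer: costly but sticky}}
```

With these made-up numbers the recidivist still wins; flip the ratios and the first-timer wins, which is exactly why the net impact is unclear without measuring both quantities.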
Image is working for me now.
The image still isn’t showing up for me.
Is there any reason to share those details privately instead of being transparent in public?
Thanks for letting us know about this study!
I’ll second the request for details. Especially within EA, it’s pretty important to provide specifics (study plan, ideally a pre-registration of the proposed analysis, the analysis itself, raw data, etc.) when citing study results like this.
The value in discussing the meaning of a word is pretty limited, and I recognize that this usage is standard in EA.
Still, I’ve done a pretty bad job explaining why I find it confusing. I’ll try again:
Suppose we had an organization with a mission statement like “improve the United States through better government.” And suppose they had decided that the best way to do that was to recommend that their members vote Republican and donate to the Republican Party. The mission is politically neutral, but it’d be pretty weird for the organization to call itself “politically neutral”.
This isn’t a criticism of Michelle’s post or GWWC, since their usage of the phrase is (as I now know) standard in EA. (Initially I was criticizing this post, but I was confused. Sorry!) Instead, it’s a criticism of how EA uses the term generally. The “EA definition” is different from a common-sense definition.
As I see it now, “X-neutral” is implicitly “X-neutral for some purpose Y”. In the way EAs use “cause-neutral”, Y is basically “cause selection”: it means that EAs haven’t committed to a cause before they select one. That’s a good and useful part of EA, but it’s also pretty narrow and (I claim) not the most natural meaning of “cause-neutral” in all contexts.
“Cause-neutral” sounds like a phrase whose meaning you could understand based on a small amount of context, but really you need the special EA definition. This makes it jargon. Jargon can be helpful, but in this case I think it’s not.
That’s not really inconsistent with cause-neutrality, given Michelle’s definition (which I admit seems pretty common in EA).
(As long as GWWC is open to the possibility of working on something else instead, if something else seemed like a better way to help the world.)
Not really your fault. I’m starting to think the words inherently mean many things and are confusing.
Thanks for the posts.
Yep, we’re just using different definitions. I find your definition a bit confusing, but I admit that it seems fairly common in EA.
For what it’s worth, I think some of the confusion might be caused by my definition creeping into your writing sometimes. For example, in your next post (http://effective-altruism.com/ea/wp/why_poverty/):
“Given that Giving What We Can is cause neutral, why do we recommend exclusively poverty eradication charities, and focus on our website and materials on poverty? There are three main reasons …”
If we’re really using your definition, then that’s a pretty silly question. It’s like saying “If David is really cause neutral, then why is he focused on animals?” or “If Jeff is cause neutral, why does he donate to AMF?” Using your definition, there’s (as we’ve both pointed out) absolutely no tension between focusing on a cause and being cause neutral.
I’d suggest that we interpret “cause-neutrality” in a more straightforward, plain-language way: neutrality about what cause area you support; lack of commitment to any particular cause area.
As with your definition, cause-neutrality is a matter of degree. No one would be completely neutral across all possible causes. In an EA context, a “cause-neutral” EA person or organization might be just interested in furthering EA and not specifically interested in any of the particular causes more than others. But they might want to exclude some causes from EA, which is a limit on their “cause-neutrality”, and might be a thorny subject.
For example:
I don’t know who runs this website or how cause-neutral they are as an individual or organization, but it seems to be run in a pretty cause-neutral way.
I’m not cause-neutral (I’m almost exclusively focused on animals, because I think that’s where I can help the most).
Expanding the pledge made GWWC more cause-neutral, but looking at the website, the organization doesn’t come across as particularly cause-neutral to me.
Etc.
Hi Michelle--
I’m a bit confused. If cause-neutrality is “choos[ing] who to help by how much they can help”, then there are many individuals and organizations who seem to fit that definition whom I wouldn’t ordinarily think of as cause-neutral. For example, many are focused exclusively on global health; many others on animals; etc. Many of those with a cause-exclusive focus chose that focus using “how much they can help” as the criterion. Many of them came to different conclusions from others (due to different values, different readings of the evidence, etc.), which isn’t surprising.
I’m hesitant to try to define EA, but your condition really seems more appropriate as part of a definition of EA than as a definition of cause-neutrality.
If we accept your definition, then it seems like you could say “GWWC is cause-neutral, and also GWWC is exclusively focused on promoting global health and has no interest in any other EA causes.” (I don’t mean that’s what you did say or will say—it’s just something consistent with GWWC being cause-neutral in this sense.)
Edit:
Hmm, I’ve heard from a good friend that your definition is the one they’re familiar with in EA, and that they see no tension between being cause-neutral and entirely focused (at the moment) on one cause.
See also the discussion on Jeff’s FB post: https://www.facebook.com/jefftk/posts/775488981742.
Especially helpful:
“Back when I was doing predictions for the Good Judgement Project this is something that the top forecasters would use all the time. I don’t recall it being thought inaccurate and the superforecasters were all pretty sharp cookies who were empirically good at making predictions.”
The name “Multiple Stage Fallacy” seems to encourage equivocation: Is it a fallacy to analyze the probability of an event by breaking it down into multiple stages, or a fallacy to make the mistakes Eliezer points to?
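For concreteness, the decomposition itself is just the chain rule of probability, which is exact; the alleged fallacy lies only in systematically underrating the conditional probabilities. A minimal sketch with invented numbers:

```latex
% Chain rule: decomposing event A into stages S_1, ..., S_n is exact.
P(A) = P(S_1)\,P(S_2 \mid S_1)\cdots P(S_n \mid S_1, \ldots, S_{n-1})

% Alleged failure mode (toy numbers): rating ten stages at 0.7 each,
% even when later stages are nearly certain given the earlier ones,
% drives the product misleadingly low:
0.7^{10} \approx 0.028
```

So whether a given multi-stage estimate is fallacious turns entirely on whether the individual conditionals are miscalibrated, not on the decomposition itself.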
For the Nate Silver example, Eliezer does aim to point out particular mistakes. But for Jeff, the criticism falls somewhere between these two possibilities: there’s a claim that Jeff makes these mistakes (which seems to be wrong; see Jeff’s reply), but it’s as if merely invoking “multiple stages” means there’s no need to actually make the argument.
Robert, your charts are great. Adding one that compares “give in year 0” with “give in year 1” would illustrate Carl’s point.
It seems like Rob is arguing against people using Y (the Pascal’s Mugging analogy) as a general argument against working on AI safety, rather than as a narrow response to X.
Presumably we can all agree with him on that. But I’m just not sure I’ve seen people do this. Rob, I guess you have?
Yeah, Matthews really should have replaced “CS” with “math” or “math and philosophy”.
That would be more accurate, more consistent with AI safety researchers’ self-conception, and less susceptible to some of these counterarguments (especially Julia’s point that in CS, the money and other external validation are concentrated much more in for-profits than in AI safety).
This post seems to be about a mix of giving that you think isn’t the most effective (public radio) and giving that you think is (plausibly) the most effective but isn’t widely acknowledged to be effective within EA.
“This is what we need: intelligent criticism of EA orthodoxy from the outside.”
Does Superintelligence really rise to the level of “EA orthodoxy”?
This might just be a nitpick, but it really does seem like we’d want to avoid implying something too strong about what counts as EA orthodoxy.
Thanks!
I wanted to make a top-level post for it a few days ago, but I needed 5 more upvotes before I could create one. So I took the chance to share it here when I saw this “Open Thread”.
For what it’s worth, I think this would be improved by more information about the standards for accepting applications. (Apologies if that already exists somewhere I haven’t been able to find.)
[Edited to remove the word “transparency”, which might have different connotations than I intended.]