Well, regarding Anthropic at least, this particular bet may be lucky, but if you make a bunch of high-variance bets and one of them turns out in your favor, is that still just luck?
Ian Turner
Thanks for sharing this report, and for all the work that went into this program so far.
Regarding the social desirability bias, and survey problems generally, there may be a few tweaks that would help with the situation.
Social desirability bias in surveys can be significantly reduced by using the “list experiment” technique.
There might be a way to phrase the question so that the social desirability bias goes the other way. For example, instead of asking “did you use the products”, you could ask “do you still have the products?”
If you ask people to keep the packaging after use, then you could ask to see it (and observe if it has been used, or not). This might also help estimate diversion.
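For illustration, here is a minimal simulation of the list-experiment estimator mentioned above (the sample sizes, list length, and prevalence are all made up): control respondents report how many of several innocuous statements apply to them, treatment respondents get the same list plus the sensitive item, and the difference in mean counts estimates the sensitive item's prevalence without any individual ever disclosing their own answer.

```python
import random

random.seed(0)

def simulate_list_experiment(n=50_000, true_prevalence=0.6):
    """Sketch of the list-experiment estimator with hypothetical numbers.

    Control respondents count how many of 3 innocuous items apply to
    them; treatment respondents get the same list plus one sensitive
    item (e.g. "I did not use the product"). Respondents report only
    the count, so the sensitive answer stays private.
    """
    def innocuous_count():
        # Three unrelated yes/no items, each true with probability 0.5.
        return sum(random.random() < 0.5 for _ in range(3))

    control = [innocuous_count() for _ in range(n)]
    treatment = [innocuous_count() + (random.random() < true_prevalence)
                 for _ in range(n)]

    # The difference in mean counts estimates the sensitive item's
    # prevalence in the population.
    return sum(treatment) / n - sum(control) / n

estimate = simulate_list_experiment()  # should land near 0.6
```

With samples this size the estimate lands within a percentage point or two of the true prevalence; the price of the privacy protection is a noisier estimate than direct questioning would give.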
Regarding the overlap with ANRiN, have you estimated the prior probability of that happening, given the size of the programs? It makes me wonder if there is a bias in the selection of treatment locations that makes this more likely, and which might also affect results in other ways. For example, maybe both organizations are selecting treatment locations with better transportation infrastructure, in which case the program might prove harder to scale in the future.
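For what it's worth, a sketch of the null model I have in mind (the candidate-pool and program sizes here are invented): if both programs selected sites uniformly at random from the same pool, the number of shared sites would follow a hypergeometric distribution, and an observed overlap well above its mean would be evidence of correlated site selection.

```python
from math import comb

def overlap_pmf(N, a, b, k):
    """P(the programs share exactly k sites) when one program picks a
    sites and the other independently picks b sites, each uniformly at
    random from the same pool of N candidate locations (hypergeometric)."""
    return comb(a, k) * comb(N - a, b - k) / comb(N, b)

def p_overlap_at_least(N, a, b, k):
    """Tail probability: chance of k or more shared sites."""
    return sum(overlap_pmf(N, a, b, j) for j in range(k, min(a, b) + 1))

# Hypothetical sizes: 200 candidate locations, programs of 40 and 30 sites.
N, a, b = 200, 40, 30
expected_overlap = a * b / N  # 6 shared sites expected under random selection
```

If the actual overlap is far out in the tail of this distribution, that is the kind of signal that would make me suspect both organizations are drawn to the same "convenient" locations.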
My sense is that EA, and especially GiveWell, made some enemies in the early days by shining a light on how badly the philanthropic sector was performing. So you got stuff like Charity Navigator describing Effective Altruism as an “elitist philosophy”. Particularly early on, GiveWell was not shy about criticizing big incumbents like UNICEF or Kiva. This probably helped attract attention, but I doubt it helped make friends.
Maybe this is inevitable — maybe only an outsider movement would be able to accomplish what GiveWell has — but it’s not so surprising that an industry being disrupted has negative feelings about the disrupters.
Needless to say, there is a sad truth here: since most foundations are not held accountable for effectiveness, they can put their own feelings first.
That citation is retracted?
Is there a meta-analysis studying the effect size of this intervention? The reported effect sizes seem unrealistically high to me.
That is explicitly true, no? Open Philanthropy was an early OpenAI donor.
What does this have to do with effective altruism?
This could be interesting as a counterpoint to (for example) this essay.
Seems like the Geneva Convention falls into this category?
The most good you can do is a Schwartz Set.
Basically, I think the idea is that because of inevitable uncertainty, there will be multiple activities/options/donations that may all be considered “the best”, or at least among which it is not possible to draw a comparison.
I think this is true even if all moral outcomes are comparable, but of course if not then it follows that all activities are probably not comparable either.
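For concreteness, here is a toy computation of the Schwartz set (the GOCHA variant) over a hypothetical strict-defeat relation, using the standard path characterization: an option belongs to the set iff it defeats, at least indirectly, every option that indirectly defeats it.

```python
from itertools import product

def schwartz_set(options, beats):
    """Compute the Schwartz set of a strict 'defeats' relation.

    Path characterization: x is in the Schwartz set iff for every y
    with a defeat-path to x, x also has a defeat-path back to y.
    """
    reach = {(x, y): (x, y) in beats for x in options for y in options}
    # Floyd-Warshall-style transitive closure of the defeat relation
    # (the intermediate vertex k must be the outermost loop).
    for k, i, j in product(options, repeat=3):
        if reach[(i, k)] and reach[(k, j)]:
            reach[(i, j)] = True
    return {x for x in options
            if all(reach[(x, y)] for y in options if reach[(y, x)])}

# Toy example: A defeats B; C is incomparable to both.
best = schwartz_set(["A", "B", "C"], beats={("A", "B")})  # {"A", "C"}
```

In the toy example there is no single "best": A and C are incomparable, so both survive, which is exactly the multiple-undominated-options situation described above.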
According to the article, there are high-performing PFAS alternatives, but they are more expensive. So instead, Vestergaard allegedly went with the cheaper, lower-performing option.
So, to be clear, it’s not like I have a back-of-the-envelope calculation or anything.
The way I see it, charity is hard mainly because it’s hard to identify opportunities that scale, and even when we do, most of our efforts are wasted. With Deworm The World, for example, only about half of treated children have any worm infection at all. Targeting charitable interventions is usually not cost-effective because the best beneficiaries can be hard to find. This is even harder if we need the reasoning and evidence to be legible.
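Not a rigorous calculation, but the shape of the trade-off can be shown with made-up numbers: when the screening test costs more than the treatment, "wasting" doses on uninfected children is still the cheaper strategy per infection treated.

```python
# All numbers hypothetical, purely to illustrate the trade-off; these
# are not Deworm The World's actual costs.
cost_per_treatment = 0.50  # mass deworming pill, per child
cost_per_screen = 5.00     # diagnostic test, per child
prevalence = 0.5           # fraction of children actually infected

# Mass treatment: dose everyone, so half the pills go to uninfected
# children, but the pills are cheap.
mass_cost_per_infection = cost_per_treatment / prevalence

# Targeted treatment: screen everyone first, treat only positives.
# Each infection found requires 1/prevalence screens plus one pill.
targeted_cost_per_infection = cost_per_screen / prevalence + cost_per_treatment
```

With these numbers, mass treatment costs $1.00 per infection treated versus $10.50 for the targeted approach, which is why finding beneficiaries, rather than treating them, tends to dominate the cost.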
But, if we are able to identify targeted cases “by accident” (or, in the course of living life), then we get the benefits of targeting for free, without either the cost of finding beneficiaries or the cost of legible/rigorous impact evaluation.
In the rich world, I think this sort of impact usually comes from behaviors that are free or very low cost to the donor. An example is giving CPR in a public place — it could potentially save a life, for a pretty small opportunity cost, but it wouldn’t be worth it to give up your career just to be around in case someone needs CPR. Or a more minor (but also maybe more common) example might be introducing two people who are well positioned to help one another, where the potential connection is discovered incidentally, or by accident.
Does that make sense?
My prior is that there are a lot of cost-effective actions available in everyday life, even if you don’t live in Uganda, but that they are hard to scale. The circumstances of your life probably expose you to more significant scaling opportunities, though, even compared to others living in Uganda.
I agree, and to be clear I’m not trying to say that any forum policy change is needed at this time.
Hi Ben,
It seems to me that one should draw a distinction between, “I see this cause as offering good value for money, and here is my reasoning why”, and “I have this cause that I like and I hope I can get EA to fund it”. Sometimes the latter is masquerading as the former, using questionable reasoning.
Some examples that seem like they might be in the latter category to me:
In any case though, I’m not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claims of evidence are missing, we can assume good faith and respond with questions like, “why did you choose this program”, “why did you conduct the analysis in this way”, or “have you thought about these potentially offsetting considerations”. In the examples above, the original posters generally haven’t engaged with these kinds of questions.
If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven’t seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.
Ian
I guess any report must be considered on its own terms, but I’ve been pretty down on this stuff as a category ever since I heard that the Center for Strategic and International Studies was cheerleading the idea that there were WMDs in Iraq.
Do you mind if I ask why you decided to run this particular program, in this particular location?
I can try to scare up some sources, but do you mind if I ask if there are particular claims that you are especially interested in?
Hi Charlie, thanks for your reply.
I am a dilettante and don’t have much further to offer on social desirability bias, unfortunately. You might try connecting with a social scientist, development economist, or staff at one of the EA or EA-adjacent global health and development charities operating at the frontier of evidence for their respective interventions, such as GiveWell, GiveDirectly, Living Goods, IDinsight, DMI, Evidence Action, etc.