I guess the question I have is, if the fraud wasn’t noticed by SBF’s investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn’t it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
Ian Turner
Bloomberg: Unacknowledged problems with LLINs are causing a rise in malaria.
My sense is that EA, and especially GiveWell, made some enemies in the early days by shining a light on how badly the philanthropic sector was performing. So you got stuff like Charity Navigator describing Effective Altruism as an “elitist philosophy”. Particularly early on, GiveWell was not shy about criticizing big incumbents like UNICEF or Kiva. This probably helped attract attention, but I doubt it helped make friends.
Maybe this is inevitable — maybe only an outsider movement would be able to accomplish what GiveWell has — but it’s not so surprising that an industry being disrupted has negative feelings about the disrupters.
Needless to say, there is a sad truth here: since most foundations are not held accountable for effectiveness, they can put their own feelings first.
Thanks for sharing this report, and for all the work that went into this program so far.
Regarding the social desirability bias, and survey problems generally, there may be a few tweaks that would help with the situation.
Social desirability bias in surveys can be significantly reduced by using the “list experiment” technique.
There might be a way to phrase the question so that the social desirability bias goes the other way. For example, instead of asking “did you use the products?”, you could ask “do you still have the products?”
If you ask people to keep the packaging after use, then you could ask to see it (and observe if it has been used, or not). This might also help estimate diversion.
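For concreteness, here is a minimal sketch of the list-experiment estimator mentioned above. The idea is that control respondents count how many of several innocuous items apply to them, treatment respondents see the same list plus the sensitive item, and the difference in mean counts estimates the prevalence of the sensitive behavior without anyone admitting to it individually. All data below is made up for illustration.

```python
def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment.

    control_counts: item counts from respondents shown only innocuous items.
    treatment_counts: counts from respondents shown the same items plus
    the sensitive item (e.g., "I did not use the product").
    """
    mean_control = sum(control_counts) / len(control_counts)
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    return mean_treatment - mean_control

# Illustrative (made-up) data: 4 innocuous items for the control group,
# 4 innocuous + 1 sensitive item for the treatment group.
control = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2]    # mean 2.0
treatment = [3, 2, 3, 2, 3, 2, 2, 3, 3, 2]  # mean 2.5

print(list_experiment_estimate(control, treatment))  # → 0.5, i.e. ~50% prevalence
```

No individual answer reveals whether that respondent engaged in the sensitive behavior, which is what removes the incentive to give the socially desirable answer.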
Regarding the overlap with ANRiN, have you estimated the prior probability of that happening, given the size of the programs? It makes me wonder if there is a bias in the selection of treatment locations that makes this more likely, and which might also affect results in other ways. For example, maybe both organizations are selecting treatment locations with better transportation infrastructure, in which case the program might prove harder to scale in the future.
Seems to me that the obvious solution here is:
GiveDirectly receives some zakat funds and some unrestricted funds
Zakat funds are used to pay for payments to Muslims
Unrestricted funds are used to pay for overhead and payments to non-Muslims.
How do you decide who is Muslim? Probably the easiest approach is a population average, e.g., in Yemen 99.99% are Muslim (according to GD); or you could conduct surveys of targeted populations. Worst case, you ask recipients after the fact.
Incidentally this is also the approach used by IRUSA.
Another similar example is when USAID funded GiveDirectly. They are prohibited from spending on alcohol, so GD estimated how much went to alcohol and other donors paid for that portion.
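As a toy illustration of the split proposed above (all figures below are hypothetical assumptions, not GiveDirectly's actual numbers):

```python
def split_funding(total_payments, overhead, muslim_share, zakat_available):
    """Charge zakat funds only for payments to (estimated) Muslim
    recipients; unrestricted funds cover overhead and the remainder."""
    zakat_eligible = total_payments * muslim_share
    zakat_used = min(zakat_available, zakat_eligible)
    unrestricted_needed = (total_payments - zakat_used) + overhead
    return zakat_used, unrestricted_needed

# E.g., $1,000,000 of transfers in a region estimated to be 99% Muslim,
# $100,000 of overhead, and $500,000 of zakat on hand:
zakat_used, unrestricted = split_funding(1_000_000, 100_000, 0.99, 500_000)
print(zakat_used, unrestricted)  # → 500000 600000
```

The same structure handles the USAID case: estimate the restricted-ineligible share (alcohol spending there, non-Muslim recipients here) and let unrestricted donors cover it.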
Thanks for posting this. I think this is the kind of practical, actionable analysis that we need.
Regarding this:
Given that there is still no way for model developers to deterministically guarantee a model’s expected behavior to downstream actors, and given the benefits that advanced AI could have in society, we think it is unfair for an actor to be forced to pay damages regardless of any steps they’ve taken to ensure the advanced AI in question is safe.
It seems to me that this is begging the question. If we don’t know how to make AIs safe, that is a reason not to make AIs at all, not a reason to make unsafe AIs. This is not really any different from how the nuclear power industry has been regulated out of existence in some countries[1].
[1] I think this analogy holds regardless of your opinions about the actual dangerousness of nuclear power.
I put this video into summarize.tech and this is what it came back with:
In the “How the Wealthy Use ‘Charity’ to Screw Everyone Else with Amy Schiller—Factually! − 238” YouTube video, Amy Schiller discusses concerns with effective altruism, a philosophy of donating based on its potential impact, which grew out of a desire to do the most good and reduce suffering. However, Schiller believes that effective altruism reduces human worth to mere survival and neglects the importance of human flourishing and creating new things. She criticizes its hubris and paternalistic view, which aims to optimize and perfect the outcomes of philanthropy for the donor’s satisfaction. Schiller also questions the lack of focus on climate change and social issues like income inequality and labor rights in certain philanthropic organizations. She calls for a more democratic approach to giving, where more people have the discretionary income to engage in charitable giving, and for philanthropists to use their wealth to influence policy directly. Schiller advocates for partnerships between philanthropy, government, and public institutions to create positive change. LeBron James’ philanthropy, which provides both tangible and intangible benefits to the community, is given as an example of effective and humble philanthropy. The speaker emphasizes the need for government to provide basic needs and philanthropy to provide things that bring joy and connect people to their souls.
ChatGPT bug leaked users’ conversation histories
Oh hey, I wrote a blog post (sorta) about this.
The TLDR: Since I was a teenager I’ve been looking for ways to give effectively, and once GiveWell appeared doing so became a whole lot easier.
How did this unusual lifestyle choice affect the way you present yourself to others? Opinion conformity is a common impression management technique, and do-gooder derogation is a well documented phenomenon. Do you think earning-to-give affected your career or relationships, compared to the earning-to-spend hypothetical?
I certainly don’t mean to question others’ on-the-ground experience, but when I asked people in Uganda and Kenya what programs to fund, water projects were the most common response.
I had assumed that the reason for this was something like, drinking poor quality water is unpleasant or stigmatized beyond the simple health effects, or maybe assumptions about the appropriate role for international funders. But I can’t discount the possibility they know something we don’t.
One of the problems that I observe in this conversation is just the meaning of the word “Aid”. It seems like in some cases this can refer to directly supporting a government’s budget, while in other cases it could refer to a foreign NGO directly administering a program. Should we expect such diverse interventions to have equivalent risk of corruption or institutional effects? To me they seem quite different.
This article seems to presuppose that EA has a worse time with bad behavior than other “less weird” groups. But is that actually true? For example Scott Alexander’s evidence (very limited though it is) seems to point the other way.
The US DOT may technically have official guidance for the value of a statistical life, but I don’t think this actually informs much of the department’s priorities. Most DOT spending is on projects that can be expected to increase the number of roadway deaths due to increased speed and increased miles driven.
The US DOT budget is set by Congress and Congress has allocated most of that budget to programs other than safety (mainly highway expansions). I don’t think DOT is even allowed to enforce its VSL when making grants to states.
The reality is that there are some roadway interventions that are far more cost-effective than $10MM per life. Protected bike lane networks, for example, save lives for well under $100,000 each. But due to a mix of funding constraints, legislation, jurisdiction, and institutional inertia, DOT is not doing many of those projects.
In trying to understand the expected adoption of new practices such as dietary changes, it would be worth consulting with experts on diffusion of innovations. This field of study is explicitly concerned with the question of how and why people decide to adopt (or not adopt) new technologies or practices. Needless to say it’s a complicated question and the answers are not always obvious.
For a good introduction, read Diffusion of Innovations, by Everett M. Rogers, or for a briefer introduction, read the Wikipedia page.
So, to be clear, it’s not like I have a back-of-the-envelope calculation or anything.
The way I see it, charity is hard mainly because it’s hard to identify opportunities that scale, and even when we do, most of our efforts are wasted. With Deworm The World, for example, only about half of treated children have any worm infection at all. Targeting charitable interventions is usually not cost-effective because the best beneficiaries can be hard to find. This is even harder if we need the reasoning and evidence to be legible.
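A quick back-of-the-envelope sketch of the targeting point above. All costs here are assumptions for illustration, not GiveWell's figures: the point is only the structure of the arithmetic, i.e. that imperfect targeting dilutes cost-effectiveness, while active screening to improve targeting often costs more than it saves.

```python
# Assumed (illustrative) costs:
cost_per_child_treated = 0.50   # mass deworming treatment, per child
infection_rate = 0.50           # roughly half of treated children are infected
screening_cost = 2.00           # hypothetical cost to test one child first

# Mass treatment: half the spending reaches uninfected children,
# so the effective cost per *infected* child treated doubles.
mass_treatment_per_infected = cost_per_child_treated / infection_rate
print(mass_treatment_per_infected)  # → 1.0

# Test-and-treat: screen everyone, treat only the infected half.
# Screening costs are spread over the infected children found.
test_and_treat_per_infected = (screening_cost / infection_rate) + cost_per_child_treated
print(test_and_treat_per_infected)  # → 4.5
```

Under these assumed numbers, "wasting" half the treatments is still several times cheaper than finding the right beneficiaries, which is why incidental targeting (finding the right cases for free, in the course of living life) is so valuable.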
But, if we are able to identify targeted cases “by accident” (or, in the course of living life), then we get the benefits of targeting for free, without either the cost of finding beneficiaries or the cost of legible/rigorous impact evaluation.
In the rich world, I think this sort of impact usually comes from behaviors that are free or very low cost to the donor. An example is giving CPR in a public place — it could potentially save a life, for a pretty small opportunity cost, but it wouldn’t be worth it to give up your career just to be around in case someone needs CPR. Or a more minor (but also maybe more common) example might be introducing two people who are well positioned to help one another, where the potential connection is discovered incidentally, or by accident.
Does that make sense?
I understand that (unlike most EA or EA-adjacent organizations) AMF is almost entirely run by volunteer labor. Could you talk about your decision to operate this way, and what challenges or benefits you’ve gotten from this approach?
Can you say more about why it’s bad for employees to benefit from the charity? Does this philosophy apply to other procurement, or only labor?
As a donor myself, I care about results and I’m completely fine with a charity paying obscene bonuses if that’s what it takes to get results.
As Dan Pallotta noted in his TED talk, I think there’s a weird double standard in social recognition for charity work, where the person giving 10% of their income gets more accolades than someone making a 20% pay cut to do charitable work, if the resulting pay is still considered “high” by nonprofit sector standards.
Could you say more about what you’ve done to validate the 18-month cutoff you are using? Looking at standard practices seems like a reasonable place to start, but may not be the end of the conversation. What if most charities hold smaller reserves because of pressure from funders, and not because that is the operationally optimal amount?
GiveWell, for example, funds programs up to three years in the future. Have you spoken with anyone at GiveWell to understand why SoGive and GiveWell have arrived at such different thresholds?
Thinking hypothetically, it feels plausible to me that there are many programs out there that need more than an 18 month runway to fully implement. For example, GiveDirectly fully funded their basic income program from the start, even though the funds would not be fully distributed for 12 years.
How should we think about the 17% response rate to this survey? Is it possible that researchers who are more concerned about alignment are also more likely to complete the survey?
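As a toy illustration of how differential response rates could skew the result (every rate below is an assumption chosen only to make the overall response rate come out near 17%):

```python
# Assumed (hypothetical) population and response rates:
true_concerned_share = 0.30   # true fraction of researchers concerned about alignment
rate_concerned = 0.25         # assumed response rate among the concerned
rate_unconcerned = 0.136      # assumed response rate among everyone else

responders_concerned = true_concerned_share * rate_concerned
responders_other = (1 - true_concerned_share) * rate_unconcerned
overall_response_rate = responders_concerned + responders_other
observed_concerned_share = responders_concerned / overall_response_rate

print(round(overall_response_rate, 3))     # ≈ 0.17, matching the survey
print(round(observed_concerned_share, 2))  # ≈ 0.44, versus a true 0.30
```

So a modest difference in willingness to respond could inflate the apparent share of concerned researchers by a large margin, and nothing in the headline response rate would reveal it.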