I haven’t reviewed this in depth, but this strikes me as exactly the right type of thing that EAs should now be funding in the space of international development. There is much more potential in the meta- space that hasn’t been explored, and this example being supported, and Hauke’s experience, seem promising in that direction.
For example, other meta ideas I’d expect to be highly valuable:
Working to get IPA/J-PAL to mandate pre-registration and published results for their studies
Quantifying and breaking down ODA to the extent possible across organizations, sectors, geographies, etc.
Advocating for more CEA within IGOs, better ODA spends
Supporting the development and integration of tools/techniques that improve research quality (e.g. falsification detection)
Advocacy for more studies of key interventions (deworming and others that are promising but understudied); currently somewhat addressed by GiveWell’s IDinsight partnership.
Strengthening the evidence base behind key GiveWell assumptions (e.g. the external validity of studies that are heavily relied upon)
Creating a fully comprehensive charity dataset focused on gathering key indicators of cost-effectiveness for every 501(c)(3)
I appreciate this. A lot of smart ideas.
I know this isn’t meant to be universal, but just a note that for me, eating out is one of the best activities on the fun-per-dollar scale.
Why should this be considered tractable? And why should we think your particular approach is tractable?
I find this visualization likely to be deceptive. The ‘cost of violence’ most often includes many types of violence (domestic, community, crime, etc.) that are unaddressed by ‘peacekeeping’ interventions. Is my read correct that your visualization compares a huge category of costs against spending on only one specific part of it?
Why should we focus on peacekeeping, the effect of which is very difficult to measure, instead of scaling or improving interventions on community violence, some of which already show significant promise in cost-effectiveness?
This looks great! Looking forward to doing a more detailed read when I have more time, but I already see some resources and techniques I wasn’t aware of or have failed to fully implement thus far, so this will serve as added motivation and a nice reference.
I find that the archive of Data Is Plural is a great source for data on a wide variety of topics: http://bit.ly/2h3bNzQ
What was the communication of this like? As someone who has, I believe, monitored Charity Science pretty closely, I can’t remember a salary approaching $50k ever being communicated.
I see you didn’t call Charity Science ‘talent constrained’ but rather ‘talent limited’. Was this intentional? Charity Science seems like an org that would get much better access to talent with more funds for salaries, and that is likely a major factor in your talent shortage.
(x-post from FB, so phrasing is written more directly as a comment to Scott)
I think this is mostly spot on. There’s one or two additional things I might have included based on my experience (would probably emphasize warm introductions more and mention the value in getting on their radar early).
Also just noting that I think the email could have been improved upon, though I’m interested in whether you share this belief. My top suggestion would have been to put one of the key attention-grabbing names in the subject line, and to prioritize brevity a bit more.
I’m glad you wrote this… I do get questions in this vein a lot and expect it to be a helpful resource for many.
I can’t imagine why. Even all 3 together are shorter than many posts on here, and they really don’t have much standalone value IMHO (especially the first).
I don’t think there was any reason for this to be split into 3 posts? It’d be better to condense it into one.
The vast majority of large institutional spending is somewhat static. When there have been major shifts, it is usually in response to the combination of highly successful marketing campaigns and new events.
Malaria has been largely ongoing, without much newsworthiness (to regular media outlets) or specific press. Its funding has therefore likely stayed at a somewhat static level in most organizations.
In contrast, HIV/AIDS was emergent in previous decades. It went from nothing to being highly prominent in a short time period. Relatively large budgets were allocated against it because:
It showed a pattern of significant growth, and there was significant fear that not containing it could lead to runaway growth.
It emerged from zero cases to prominence, which was highly newsworthy.
There was a strong coordinated marketing campaign to get governments and IGOs to strongly address it.
HIV/AIDS funding was therefore set at a relatively high level, and because funding is largely static and the problem persists, it has stayed there.
I only skimmed this, but I think the majority of EAs don’t actually look into the how and why of GiveWell’s recommendations, and even fewer dig into the processes and publications behind the numbers that GiveWell eventually uses. An indirect result is that GiveWell doesn’t get as much feedback as it could likely benefit from, and too many EAs can’t speak with M&E professionals in international development at a meaningful level.
What’s explained and alluded to here, along with the criticisms, is important basic info for many EAs who are unfamiliar with it. The various methodologies for costing and discounting (both those included here and others) are definitely worth investigating further for those who haven’t.
I haven’t looked there yet, so I’m flagging that my comment was not considering the full context.
(I think the end links didn’t come up on mobile for me, though it’s also possible I overlooked the supporting documentation, specifically the section labelled ‘methodology’.)
I think it’s quite misleading to present p-values and claim that results are or aren’t ‘statistically significant’, without also presenting the disclaimer that this is very far from a random sample, and therefore these statistical results should be interpreted with significant skepticism.
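To make the point concrete, here’s a minimal sketch (with purely hypothetical counts, not the survey’s actual data): a standard two-proportion z-test will happily return a ‘significant’ p-value on a self-selected sample, because the test’s math assumes random sampling and has no way to detect selection bias.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test.

    The p-value is only interpretable under the test's assumptions,
    chiefly that both groups are random samples from their populations.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts from a self-selected online sample vs. a comparison
# group. The p-value comes out below 0.05, but since respondents chose
# themselves, 'statistical significance' here tells us little.
z, p = two_proportion_z_test(60, 100, 45, 100)
print(z, p)
```

The arithmetic is correct either way; the problem is that the number it produces only means what it claims to mean under random sampling, which is exactly the disclaimer I’d want to see presented alongside the results.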
What’s the range for amounts of money that are most appropriate for you to manage?
EDIT: It’s been over a week, and it seems particularly important that CEA answer this.
I see some significant disadvantages to this, to the point that it should be reconsidered.
EffectiveAltruism.org is designed around making EA welcoming and appealing to newcomers. The EA Forum is quite the opposite… it is in-depth, often involves controversial ideas and discussions, and can sometimes have a less welcoming tone in its content and comments.
They’re really polar opposites within EA, and by bringing the two together under the same domain and with the same front-end you’re closely associating them. That violates Marketing 101: don’t merge two things that are positioned so differently.
By sharing the same domain, the two will be closely associated in search, and by changing the front-end the association will become much stronger.
Is the intention for the forum to have more newcomers on it? I fear it will become like the Effective Altruism Facebook page in depth of content and usefulness.
Or alternatively if the forum content doesn’t change, it will turn off newcomers and detract from the utility of the main EffectiveAltruism.org site.
I’d like to further understand the plan for bringing these quite different things together, and how you might mitigate the dilution of the forum.
Small side note: Forum.effectivealtruism.org has some SEO disadvantages (v. EffectiveAltruism.org/forum), and the way you implement this transition from a technical standpoint will also affect SEO significantly, so I urge you to consult with somebody about proper ways to do so.
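For illustration, the standard technique here is per-URL 301 redirects, so each old post URL points to its exact new counterpart rather than everything landing on the homepage. This is a hypothetical sketch assuming an nginx server and a placeholder old domain (I don’t know CEA’s actual stack or URLs):

```nginx
# Hypothetical sketch: old-forum.example.com is a placeholder domain.
# Per-URL 301 redirects preserve most existing link equity and rankings
# when moving content to forum.effectivealtruism.org.
server {
    listen 80;
    server_name old-forum.example.com;

    # $request_uri carries the original path and query string through,
    # so /posts/abc123 redirects to its new home, not to the homepage.
    location / {
        return 301 https://forum.effectivealtruism.org$request_uri;
    }
}
```

Whatever the actual stack, the key points are the same: permanent (301) rather than temporary redirects, path-preserving mappings, and updating the sitemap so search engines discover the move quickly.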
To me the biggest thing missing from this is recognition of different incentive structures. The markets are very, very different because in this analysis:
99%+ of investors are selecting for-profits based on their financial return, and 99%+ of for-profits are optimizing for their financial return
<1% of donors are selecting non-profits based on their statistical impact, and <1% of non-profits are optimizing for statistical impact
As part of the <1% of donors, you’re examining a sector that largely is not trying to optimize for your goals, which creates significant differences relevant to nearly all aspects of this discussion.
Can you address the unanswered question in the announcement thread regarding EA Ventures?
Additionally, is the money already raised for this? That was the major shortcoming with the previous iteration.
Have you already raised the funds for this? EA Ventures failed a while back primarily because the money wasn’t there, and those in charge found raising funds much more difficult than they expected.
“We want to get input from people who have different viewpoints from our staff and can provide us with an outside view.”
This group seems far from able to provide an ‘outside’ view. Is there a reason to think these people offer genuinely different perspectives? Perhaps you need some less insider-type people on this if it is to accomplish the goals you foresee?
I strongly support this, especially with regard to the approach described by: “As another example, if you took an objective criterion like ‘top 10 biggest foundations 1975-2000’ and looked at all the biggest hits over those 25 years and divided it by all the money over those 25 years, would the cost-effectiveness justify all that spending?”
I think the more general, detailed approach described first is unlikely to yield sufficiently meaningful data.