For those of us (like myself) who, for family reasons or otherwise, are unable to move to a hub or location with a comparative advantage for any type of EA work, there are local chapters in many places. (Even if mine is a 30-minute drive or 45-minute bus ride away.) Those chapters can sometimes benefit from more support, and where that support is most needed and there is capacity to provide it, I understand it is already happening, or at least starting to. But that doesn’t mean it makes sense to fund EA orgs everywhere, because coordination costs and duplication are real issues. Communities that want EA infrastructure can build their own, and often have done so. On the other hand, if a community is so small that it has no locals who want to build it and can support doing so, I don’t think funding it from EA grants makes sense anyway.
Given that, I certainly agree that there are orgs that would benefit from being located in the “EA Diaspora,” specifically in the places you listed. But in many cases, they DO have such organizations already, along with a large EA community. Not coincidentally, those places are also very well connected with the EA hubs, so I’d guess many grants to them would have been excluded in the analysis. There is no lack of EA policy-focused orgs or EA community infrastructure in DC and the surrounding area, given the number of EA-aligned orgs working there, notably Georgetown’s CSET and Johns Hopkins’ CHS. Similarly, the NYC EA chapter is among the largest; not only is it a vibrant community, it is also where GiveDirectly is located. China I’m less familiar with, and it is a very different discussion, but I don’t see anything stopping people interested in those types of work from moving there instead of to SF / Oxford to be involved in EA orgs. Otherwise, starting EA orgs that replicate work being done in the hubs seems like a low-priority, ineffective activity.
I think my example of corruption reduction captures most of the types of interventions that people have suggested are useful but hard to quantify; other examples would be happiness-focused work, or pushing for systemic change of various sorts.
Tech risks involving GCRs that are a decade or more away are much more future-focused in the sense that different arguments apply, as I said in the original post.
Agreed, but as the link I included argues, the information we have is swamped by our priors and isn’t particularly useful for drawing objective conclusions.
Yes, but if it is expected to be very high value, I’d think they’d be pushing for a new EA charity with it as a focus, as they have done in the past. Most of those were dropped because the work they did wasn’t as valuable as that of the top charities.
I think we can drop the Bletchley Park discussion. On the present-day stuff, I think the key point is that future-focused interventions raise a very different set of questions than present-day non-quantifiable interventions, and you’re plausibly correct that the former are underfunded, but I was trying to focus on the present-day non-quantifiable interventions.
Bletchley Park as an intervention wasn’t mostly focused on Enigma, at least in the first part of the war. It was tremendously effective anyway, as should have been expected. The fact that new and harder codes were being broken was obviously useful as well, and from what I understand, was encouraged by the leadership alongside the day-to-day codebreaking work.
And re: AI alignment, it WAS being funded. Regarding nanotech risks and geoengineering safety now, they have been a focus of discussion at CSER and FHI, at least, and there is agreement that each is relatively low priority compared to other work. (But if someone qualified and aligned with EA goals wanted to work on them more, there’s certainly funding available.)
Agreed on all points!
I’d note that the problem with predicting magnitudes is simply that it’s harder to do than predicting a binary “will it replicate,” though both are obviously valuable.
I agree that this seems like a useful analysis—any chance you have time to read through the grants and write it up?
I haven’t looked at the specific grants, but my understanding was that EA orgs with specific purposes would not usually fund many of the activities that EA grants are used for, since the purpose of the grants is to do something new or different from what extant organizations do. (Also, organizations usually have organizational and logistical constraints that make expanding into new areas of work inadvisable; look at how badly most mergers go in the corporate world, for instance.)
But I agree there are some chicken-and-egg issues. I’m less sure, however, whether geographic diversity is as useful as it normally would be given the advantages of concentrating people in places with significant extant EA infrastructure and networks that enable collaboration.
It seems that there is a critical endogenous factor for location: the people really interested in running EA projects, and most capable of running them, gravitate to EA hubs, and have moved there. Many of the most dedicated and capable EAs moved to these hubs and work at these organizations, while the less dedicated or capable ones did not try to, or weren’t hired. It’s clear that many of the groups are pulling in EAs from other parts of the world, so the concentration in fact reflects this movement. This doesn’t explain the entire bias, and I agree that networks matter for funding and that this can be very problematic, but it’s a critical factor.
As I said in the epistemic status, I’m far less certain than I once was, and on the whole I’m now skeptical. As I said in the post and earlier comments, I still think there are places where unquantifiable interventions are very valuable; I just think that unless it’s obvious that they will be (see: the Diamond Law of Evaluation), I’d claim that quantifiably effective interventions are in expectation better.
See my comment above. Bletchley Park was exactly the sort of intervention that doesn’t need any pushing. It was funded immediately because the benefit was so obvious. That’s not retrospective.
If you were to suggest something similar now that were politically feasible and similarly important to a country, I’d be shocked if it wasn’t already happening. Invest in AI and advanced technologies? Check. Invest in Global Health Security? Also check. So the things left to do are less obviously good ideas.
You say that the distribution needs to be “very” fat tailed, implying that we have a decent chance of finding interventions orders of magnitude more effective than bednets. I disagree. The very most effective possible interventions, where the cost-benefit ratio is insanely large, are things we don’t need to run as interventions at all. For instance, telling people to eat when they have food so they don’t starve would be hugely impactful, except that the benefit is so obvious that the intervention is unnecessary.
So I don’t think bednets are a massive outlier; they just have relatively low saturation compared to most comparably effective interventions. The implication of my model is that most really effective interventions are saturated, often very quickly. Even expensive systemic efforts like smallpox vaccination got funded fairly rapidly once universal eradication became possible, and the less-used vaccines are either less effective, for less critical diseases, or more expensive and/or harder to distribute. (And governments and foundations are running those campaigns, successfully, without needing EA pushing or funding.) That’s why we see few very effective outliers: since the underlying distribution isn’t fat tailed, even more effective interventions are even rarer, and those that do appear are exhausted very quickly.
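To make the tail claim concrete, here’s a minimal simulation sketch (the distributions and the “top 0.1%” benchmark are purely illustrative assumptions, not estimates from any data), contrasting how often draws exceed 10x a bednet-level benchmark under a thin-ish tail versus a very fat tail:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative cost-effectiveness draws; the sigma values are arbitrary
# choices meant only to contrast a thin-ish tail with a very fat one.
distributions = {
    "thin-ish tail (lognormal, sigma=0.5)": rng.lognormal(0.0, 0.5, n),
    "very fat tail (lognormal, sigma=2.0)": rng.lognormal(0.0, 2.0, n),
}

for name, draws in distributions.items():
    benchmark = np.quantile(draws, 0.999)  # treat "bednet level" as a top-0.1% find
    p_10x = (draws >= 10 * benchmark).mean()
    print(f"{name}: P(>= 10x bednet level) = {p_10x:.1e}")
```

Under the thin-ish tail, 10x-bednet interventions essentially never show up; under the very fat tail, they do. And if saturation removes the most effective available interventions quickly, the live pool is even thinner than the underlying distribution.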
On prediction, I agree that the conclusion is one of epistemic modesty rather than confident claims of non-effectiveness. But the practical implication of that modesty is that for any specific intervention, if we fund it thinking it may be really impactful, we’re incredibly unlikely to be correct.
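As a toy illustration of that base-rate point (all three numbers below are assumptions I made up for the example, not estimates):

```python
# If true outliers are rare, even a decent screen for "really impactful"
# is wrong almost every time it fires (simple Bayes' rule).
base_rate = 1e-4       # assumed share of candidates that are true outliers
sensitivity = 0.8      # assumed P(flagged | true outlier)
false_positive = 0.05  # assumed P(flagged | not an outlier)

p_flagged = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_flagged
print(f"P(true outlier | flagged) = {posterior:.4f}")  # ~0.0016
```

Under these assumptions, over 99% of interventions funded as likely outliers wouldn’t be.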
Also, I’m far more skeptical than you are about ‘sophisticated’ estimates. Having taken graduate courses in econometrics, I’ll say that the methods are sometimes really useful, but the assumptions never apply. Unless the system model is really fantastic, the prediction error, once model specification uncertainty is accounted for, is large enough that most econometric analyses of really complex, poorly understood systems like corruption or poverty simply don’t say anything.
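Here’s a minimal synthetic example of why specification uncertainty swamps within-model error (the data-generating process and the candidate specifications are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Invented "true" system: nonlinear, with a partially confounding variable z.
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)           # the "policy" variable of interest
y = 0.3 * x - 0.4 * x**2 + z + rng.normal(size=n)

def effect_of_x(regressors):
    """OLS fit; return the coefficient on x (the first regressor)."""
    X = np.column_stack([np.ones(n)] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Four specifications an analyst could defend ex ante:
specs = {
    "y ~ x":           [x],
    "y ~ x + z":       [x, z],
    "y ~ x + x^2":     [x, x**2],
    "y ~ x + x^2 + z": [x, x**2, z],
}
for name, regressors in specs.items():
    print(f"{name:17s} estimated effect of x: {effect_of_x(regressors):+.2f}")
```

The estimated effect of x varies by roughly a factor of two across specifications that all look defensible, and no single model’s standard errors report that spread.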
I don’t have time to discuss this in the level of detail that is warranted, but you might look into the history of ACUS and how it was funded, successful at doing deliberative decision making, defunded, restarted, etc. https://en.wikipedia.org/wiki/Administrative_Conference_of_the_United_States
Now they are doing work like this: https://www.acus.gov/working-groups
In proportion to the needs...
Again, I don’t think that’s relevant. I can easily ruin systems with a poorly spent $10m regardless of how hard it is to fix them.
I am not sure I understand why international funding should displace local expertise...
You’re saying that these failure modes are avoidable, but I’m not sure they are in fact being avoided.
Building those health institutions takes a long time; the results come slowly, with a time lag of 10+ years.
Yes, and slow feedback is a great recipe for not noticing how badly you’re messing things up. And yes, classic GiveWell-type analysis doesn’t work well for considering complex policy systems, which is exactly why they are currently aggressively hiring people with different types of relevant expertise to consider those types of issues.
And speaking of this, here’s an interesting paper Rob Wiblin just shared on the complexity and difficulty of decision-making in these domains: https://philiptrammell.com/static/simplifying_cluelessness.pdf
Yes, there are plausible tipping points, but I’m not talking about that. I’m arguing that this isn’t “small amounts of money”; it is well into the range where international funding displaces the building of local expertise, makes it harder to focus on building health systems generally rather than narrowly, undermines the need for local governments to take responsibility, etc.
I still think these are outweighed by the good, but the impacts are not trivial.
I don’t see how your argument responds to mine. The amounts don’t need to be big enough to directly solve problems in order to be large enough to have critical systemic side effects.
Note typo/missing word: “Public talks on non-core topics don’t new members or regular attendees.”
We’re well past the point where unintended systemic effects can be ignored. GiveWell has directly moved or directed half a billion dollars, and the impact on major philanthropic giving is a multiple of that. Malaria and schistosomiasis initiatives are significantly affected by this, and just as the effects cannot be dismissed, neither can the conclusion that these are large-scale initiatives, with all the attendant pitfalls.