I think we should reflect carefully on the norm being set by the comments here.
This is an exceptionally transparent and useful grant report (especially Oliver Habryka’s). It’s helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.
But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.
If you value transparency in EA and want to see more of it (and you’re not a donor to the LTF fund), it seems to me like you should chill out here. That doesn’t mean don’t question the grants, but it does mean you should:
Apply the principle of charity even more than usual
Take time to phrase your question in the way that’s easiest to answer
Apply some filter and don’t ask unimportant questions
Use a tone that minimizes stress for the person you’re questioning
I’m very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which seems to have been involved in most of the biggest initiatives to scale up EA’s vetting, through EA Grants and EA Funds).
What % of grant applicants are in the “definitely good enough” vs “definitely (or reasonably confidently) not good enough” vs “uncertain + not enough time/expertise to evaluate” buckets?
(Are these the right buckets to be looking at?)
What do you feel your biggest constraints are to improving the impact of your grants? Funding, application quality, vetting capacity, something else?
Do you have any upcoming plans to address them?
Note also that the EA Meta and Long-Term Future Funds seem to have gone slightly in the direction of “less established” organizations since their management transition, and it seems like their previous conventionality might have been mostly a reflection of one specific person (Nick Beckstead) not having enough bandwidth.
While climate change doesn’t immediately appear to be neglected, it seems possible that many people/orgs “working on climate change” aren’t doing so particularly effectively.
Historically, the environmental movement seems to have an extremely poor track record of applying an “optimizing mindset” to problems, tending to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example is the reaction to the California drought, which blamed almost everything except the actual biggest problem (agriculture).
Of course, I have no idea how much this consideration increases the “effective neglectedness” of climate change. I expect that there are still enough people applying an optimizing mindset to make it reasonably non-neglected, but maybe only on par with global health, rather than massively less neglected, as you might guess from news coverage?
What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that’s not a strong argument against doing it right now. You can’t start a political party with support from 0.01% of the population!
In general, we should do things that don’t scale but are optimal right now, rather than things that do scale but aren’t optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.
I’m sorry to see so many orgs take 10+ hours to get you only partway through the process, let alone multiple 40+ hour processes. This is especially glaring compared to the very low number of orgs that rejected you in under 5 hours.
It sounds like many of these orgs would benefit both you and themselves by improving their evaluations to reject people earlier in the process.
The current technical interview process for my team at Wave takes under 10 hours across 4 stages (assuming you spend 1 hour on your cover letter and resume); the majority of rejections happen after less than 5 hours. The non-technical interview process is somewhat longer, but I would guess still no more than 15 hours, with the majority of applications rejected in under 5 hours (the final interview is a full day).
Notably, we use two work samples: a 2-hour one (where most applicants are rejected) and a 4-5-hour one for the final interview. If I were interviewing for a non-technical role, I’d insert a behavioral interview after the first work sample as well. These shorter stages help us screen out many candidates before we waste a ton of their time. It’s hard for me to imagine needing 8+ hours for a work sample unless the role is extremely complex and requires many different skills.
I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I’d be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.
For instance, a lot of today’s fiction seems cynical and pessimistic about human nature; the characters frequently don’t seem to have goals related to anything other than their immediate social environment; and they often don’t pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.
I would be extremely interested if you were to write a hypothetical “intro to child protection/welfare for EAs” post on this forum! (It would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment shows that other people agree :)
Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.
“Cause X” usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (the term may come from this talk). So I’d expect a cause which the EA community decided was “cause X” to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person’s take.)
I agree with the weakest statement that you make in this article, namely that vegetarianism is not a totally obvious conclusion from EA premises, and non-vegetarian EAs should not be shamed or moralized at.
That said, I think your imaginary vegetarian debate partner is making a pretty bad pro-vegetarian case. For instance:
You managed to list six considerations that weigh against vegetarianism in your sketched-out cost-effectiveness analysis, but none that weigh for it. This is surprising, since there are plenty of such considerations! Here are a few examples:
The cost-effectiveness estimate for The Humane League is likely non-robust and biased upwards (so when this bias is accounted for, it costs more to avert the same amount of harm through donation than through personal choices)
Even “cage-free eggs” are very unlikely to be suffering-free, since the farmer is optimizing for “ability to put the label ‘cage-free’ on the eggs” and not actually reducing the suffering of the chickens
Each consumed animal is responsible for some flow-through deaths in expectation, due to e.g. animal deaths while their feed is being harvested, which weighs against the conversion to human QALYs or cage-free egg subsidies
The inconvenience-cost-per-meal of vegetarianism falls dramatically as you get used to being vegetarian
Even if, logically, inconvenience-from-donating and inconvenience-from-vegetarianism should be perfectly fungible, in practice treating them as fungible is likely psychologically unrealistic
Torture on factory farms may be worse than a painless death, so averting one factory-farmed year might save more than one QALY
“Vegetables harm animals too” is likely not actually a consideration that weighs against, because the farmed animals are fed vegetables as well (see above consideration).
You claim that eating cheaply and eating vegetarian are in tension. This seems extremely unlikely to be true to me. Meat is expensive.
More broadly, the case that eating fewer animals causes fewer animals to be harmed is very robust and common-sense, whereas the case for instead offsetting the harm with increased donations seems rather fragile and uncertain.
Again, I agree with the suggestion that there are parameters someone could stick into your calculations for their personal estimates of the moral value of small animals, personal ability to funge different types of inconveniences, personal trust in multiplying-lots-of-numbers versus robust common-sense arguments, etc. But I think this parameter space is smaller than you’re giving it credit for.
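To make the offsetting arithmetic concrete, here’s a minimal back-of-envelope sketch. Every number in it (the harm per meat-eating year, the naive cost per unit of harm averted, the two adjustment factors) is a hypothetical placeholder I chose for illustration, not an estimate from the post or from me:

```python
# All numbers below are hypothetical placeholders, chosen only to show how
# the considerations above move the "offset instead of abstain" calculation.

harm_per_meat_year = 1.0     # suffering-units caused by one year of meat-eating
naive_cost_per_unit = 5.0    # $ per suffering-unit averted, per a naive charity estimate
robustness_discount = 3.0    # estimate biased upwards / non-robust (first consideration)
flow_through_factor = 1.2    # extra feed-harvest deaths per animal (third consideration)

# Adjust the naive estimate for bias, and the harm for flow-through deaths:
adjusted_cost_per_unit = naive_cost_per_unit * robustness_discount
offset_cost = harm_per_meat_year * flow_through_factor * adjusted_cost_per_unit

print(f"Naive offset cost:    ${harm_per_meat_year * naive_cost_per_unit:.2f}/year")
print(f"Adjusted offset cost: ${offset_cost:.2f}/year")
```

Even with made-up inputs, the structural point holds: each of the pro-vegetarian considerations above multiplies the cost of offsetting rather than leaving it fixed.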
Wow! This is an order of magnitude larger than I expected. What’s the source of the overhead here?
Defenders of EA chide critics for not setting up organizations to evaluate potential systemic changes and for their vague critiques of capitalism. They ignore the entire academic discipline of Social Movement Studies, which focuses on the processes and dynamics of large-scale social change as well as vast quantities of analysis by social movements themselves. The failure within EA to even acknowledge the existence of this evidence, let alone engage with it, suggests status-quo bias.
I had never heard of this field, and (I suspect) neither have many of your readers. Because of this, if you aim to persuade EAs, I think you would do well to follow Noah Smith’s “Two Paper Rule” here. Can you recommend some papers that are good exemplars of the “vast quantities of analysis” here?
If you want me to read the vast literature, cite me two papers that are exemplars and paragons of that literature. Foundational papers, key recent innovations—whatever you like (but no review papers or summaries). Just two. I will read them.
Yikes; this is pretty concerning data. Great find!
I’d be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their “realistic calculation” of their cost-effectiveness, which assumes 5% annualized attrition. (That’s not an apples-to-apples comparison, so their estimate isn’t necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
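As a minimal sketch of why the attrition assumption matters so much (the 30% figure below is purely illustrative, not the survey’s estimate):

```python
# Fraction of pledgers still giving after `years`, assuming a constant
# annualized attrition rate. The 30% rate is a made-up illustration.
def surviving(annual_attrition: float, years: int) -> float:
    return (1 - annual_attrition) ** years

for rate in (0.05, 0.30):
    print(f"{rate:.0%} annual attrition -> {surviving(rate, 10):.0%} "
          "of pledgers still giving after 10 years")
```

Under 5% attrition, about 60% of pledgers remain after a decade; under a hypothetical 30%, about 3% do, which is why the cost-effectiveness estimate is so sensitive to this input.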
This is a pretty dramatic tone given the level of evidence backing up the claims:
The post never actually figures out a plausible number for a conversion rate that could be compared to anyone else’s. It’s not actually at all clear to me that our conversion rates are unusually low. Conversion rates for everything are low.
This goes double if you’re asking for relatively extreme actions like EA pitches do, and counting fairly tiny exposures like watching a YouTube video in the denominator. It seems pretty unreasonable to expect a high conversion rate from that.
The article doesn’t present any evidence (just hypothetical anecdotes) that people who don’t convert initially become unlikely to convert in the future. What’s the magnitude of this effect? How much of it is actually causal (vs. those people just having a very low likelihood of converting in the first place)?
For instance, in a world where the quality of the message has no effect, but half the population has a conversion rate of 1 and the other half has a rate of 0, you’ll see this effect (because the people who don’t convert the first time have a conversion rate of 0 as well), even though optimizing your message doesn’t matter at all.
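Here’s a toy simulation of that world (the 50/50 split and the “message quality” knob are both artificial, just to mirror the hypothetical):

```python
import random

random.seed(0)

def simulate(message_quality: float, n_people: int = 100_000):
    """Half the population always converts (rate 1), half never does (rate 0).
    message_quality is deliberately unused, mirroring the hypothetical above."""
    first_time = sum(random.random() < 0.5 for _ in range(n_people))
    non_converters = n_people - first_time
    converted_on_repeat = 0  # the rate-0 half never converts, by construction
    return first_time / n_people, converted_on_repeat / non_converters

for quality in (0.1, 0.9):
    first, repeat = simulate(quality)
    print(f"quality={quality}: first-pitch rate {first:.1%}, "
          f"repeat-pitch rate {repeat:.1%}")
```

Both the “low-quality” and “high-quality” messages produce identical data: repeat pitches never convert, yet nothing about the message caused that.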
If one person-year is 2000 hours, then that implies you’re valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
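To spell out the arithmetic (the ~$170k per person-year and the $1,000-$2,000 marginal-cost figures are my back-inferences from the numbers above, not CEA’s published inputs):

```python
# Back-of-envelope reconstruction; both inputs are inferred assumptions.
hours_per_person_year = 2000
cost_per_person_year = 170_000  # implies the ~$85/hour rate above
hourly_rate = cost_per_person_year / hours_per_person_year

for marginal_cost_per_grant in (1_000, 2_000):
    hours = marginal_cost_per_grant / hourly_rate
    print(f"${marginal_cost_per_grant:,} per marginal grant "
          f"≈ {hours:.0f} person-hours")
```

That recovers the 12-24 person-hours per marginal grant quoted above.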
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I’m sure there are other overheads that I don’t know about, but I’m curious whether you (or someone from CEA) know what they are.
[Not trying to imply that CEA is failing to optimize here or anything—I’m mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
1. Un-timed work test (e.g. OPP research analyst)
Huh. I’m really surprised that they find this useful. One of the main ways that productivity has varied among people we’ve evaluated at Wave is how quickly they can accomplish a task at a given level of quality, which differs by an order of magnitude between our best and worst candidates. (Or equivalently, how good a job they can do in a fixed amount of time.) It seems like not time-boxing the work sample would make it much, much harder to make an apples-to-apples quality comparison between applicants, because slower applicants can spend more time to reach the same level of quality.
I think that if the Standard EA Recommendation for middle- to low-income people is “come back when you make more money”, no middle- to low-income people (to a first approximation) will ever become interested in EA.
I think if I made 30k a year and asked someone what EA-related things I could do and they told me “you don’t make enough to worry about donating, try to optimize your income some more and then we’ll talk,” my reaction would be “Ack! I don’t want to upend my entire life! I just want to help some people! These guys are mean.” And then I would stop paying attention to effective altruism.
My general heuristic for stuff like this is that it’s more important for general recommendations to look reasonable than for them to be optimal (within reason). This is because by the time someone is wondering whether your policy is actually optimal, they care enough to be thinking like an effective altruist already, and are less likely to be scared off by a wrong answer than someone who’s evaluating the surface-reasonableness.