Examples (mostly from Senegal since that’s where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):
Most Senegalese companies seem to place a much stronger emphasis on bureaucracy and paperwork.
When interacting with potential business partners in East Africa, we eventually realized that when we told them our user/transaction numbers, they often assumed that we were lying unless the claim was endorsed by someone they had a trusted connection to.
In the US, we have fully transparent salaries (everyone at the company can look up anyone else’s salary in a spreadsheet). We weren’t able to extend this norm to our Senegalese subsidiary because it caused too much interpersonal conflict. (This was at least partly the result of us not putting enough investment into making the salary scale work for everyone, but my understanding is that my Senegalese coworkers were pessimistic about bringing back salary transparency even if we fixed that.)
In Senegal people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I’ve had a few colleagues who I would ask yes-or-no questions and they would answer “Yes” followed by an explanation of why the answer is no.)
Exporting different norms is quite hard at scale. You need to hire people who are closest to the norms you want, but they'll still probably be far away, so you'll also have to invest a lot in propagating the norms you want, which only really works well 1-on-1. When we needed to scale our local Senegal team quickly we ended up having to compromise on some norms to do so (e.g. salary transparency, amount of paperwork).
Broadly agree, but:
You might end up making more impact if you started a startup in your own country and donated your earnings to GiveWell / EA organizations (i.e. earning to give). This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.
Can't you just provide people with basic needs, then? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously didn't have smartphones. Providing cell service is hard (if you're not a telecom), but if an area has cell service but no internet you can still make useful information products with USSD, SMS, etc., or physical shops.
(I do think that many good startup ideas in the developing world involve providing relatively “basic” needs! But it seems to me like there’s decent opportunity there.)
Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!
Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.
I imagine that a large fraction of EAs expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one). The same general idea applies to the need for talent: there are a relatively small number of tasks that stand out as unusually in need of more talent.
The “one charity” argument is only true on the margin. It would be incorrect to conclude from this that nobody should start additional charities—for instance, even though GiveWell’s current highest-priority gap is AMF, I’m still glad that Malaria Consortium exists so that it could absorb $25m from them earlier this year. Similarly, it’s incorrect to conclude from this style of argument that the social returns to talent should be concentrated in specific fields. While there may be a small number of “most important tasks” on the margin, the EA community is now big enough that we might expect to see margins changing over time.
Also, the majority of people who are earning to give would probably be able to fund less than one person doing direct work. If your direct work would be mostly non-replaceable, then earning to give compares unfavorably to direct work. (Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.)
If you’re really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?
I haven’t actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.
Whoops, sorry about the quotes—I was writing quickly and intended them to denote that I was using “solve” in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.
These theoretical claims seem quite weak/incomplete.
In practice, autocrats’ time horizons are highly finite, so I don’t think a theoretical mutual-cooperation equilibrium is very relevant. (At minimum, the autocrat will eventually die.)
All your suggestions about oligarchy improving the tyranny of the majority / collective action problems only apply to actions that are in the oligarchy’s interests. You haven’t made any case that the important instances of these problems are in an oligarchy’s interests to solve, and it doesn’t seem likely to me.
What’s the shift you think it would imply in animal advocacy?
I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!
Yikes; this is pretty concerning data. Great find!
I’d be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their “realistic calculation” of their cost effectiveness, which assumes 5% annualized attrition. (That’s not an apples to apples comparison, so their estimate isn’t necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I’d be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.
For instance, a lot of today’s fiction seems cynical and pessimistic about human nature; the characters frequently don’t seem to have goals related to anything other than their immediate social environment; and they often don’t pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.
worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership
This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I’m very skeptical of ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?
(PS: if you’re interested in posting but unsure about content, I’d be excited to help answer any q’s or read a draft! My email is in my profile.)
What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that's not a strong argument against doing it right now. You can't start a political party with support from 0.01% of the population!
In general, we should do things that don’t scale but are optimal right now, rather than things that do scale but aren’t optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.
I would be extremely interested if you were to hypothetically write an "intro to child protection/welfare for EAs" post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment shows that other people agree :)
Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.
“Cause X” usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (it may come from this talk). So I’d expect a cause which the EA community decided was “cause X” to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person’s take.)
While climate change doesn’t immediately appear to be neglected, it seems possible that many people/orgs “working on climate change” aren’t doing so particularly effectively.
Historically, it seems like the environmental movement has an extremely poor track record at applying an “optimizing mindset” to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought which blame almost anyone except the actual biggest problem (agriculture).
Of course, I have no idea how much this consideration increases the “effective neglectedness” of climate change. I expect that there are still enough people applying an optimizing mindset to make it reasonably non-neglected, but maybe only par with global health rather than massively less neglected like you might guess from news coverage?
If one person-year is 2000 hours, then that implies you’re valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I’m sure there are other overheads that I don’t know about, but I’m curious if you (or someone from CEA) knows what they are?
[Not trying to imply that CEA is failing to optimize here or anything—I’m mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
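For concreteness, here's a minimal back-of-envelope sketch of the arithmetic. The $85/hour rate and 2000-hour person-year come from my comment above; the $1,000-$2,000 marginal cost per grant is my own assumption, picked because it reproduces the 12-24 person-hour range:

```python
# Back-of-envelope check of the grant-processing overhead estimate.
# Known from the comment: ~$85/hour staff time, 2000-hour person-year.
# ASSUMED: a marginal cost per grant of $1,000-$2,000 (hypothetical,
# chosen to reproduce the 12-24 person-hour figure).

HOURS_PER_PERSON_YEAR = 2000
HOURLY_RATE = 85  # dollars/hour, implied staff-time valuation

# Implied value of one person-year of CEA staff time.
implied_person_year_value = HOURLY_RATE * HOURS_PER_PERSON_YEAR
print(f"Implied person-year value: ${implied_person_year_value:,}")

for marginal_cost in (1000, 2000):  # hypothetical marginal cost per grant
    hours = marginal_cost / HOURLY_RATE
    print(f"${marginal_cost:,} per grant -> {hours:.0f} person-hours")
```

If the marginal cost per grant is actually much lower or higher, the person-hours estimate scales linearly with it.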
I think we should think carefully about the norm being set by the comments here.
This is an exceptionally transparent and useful grant report (especially Oliver Habryka’s). It’s helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.
But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.
If you value transparency in EA and want to see more of it (and you’re not a donor to the LTF fund), it seems to me like you should chill out here. That doesn’t mean don’t question the grants, but it does mean you should:
Apply even more principle of charity than usual
Take time to phrase your question in the way that’s easiest to answer
Apply some filter and don’t ask unimportant questions
Use a tone that minimizes stress for the person you’re questioning