I’m the CTO of Wave. We build financial infrastructure for unbanked people in sub-Saharan Africa.
Personal site (incl various non-EA-related essays): https://www.benkuhn.net/
Email: ben dot s dot kuhn at the most common email address suffix
Yikes; this is pretty concerning data. Great find!
I’d be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their “realistic calculation” of their cost effectiveness, which assumes 5% annualized attrition. (That’s not an apples-to-apples comparison, so their estimate isn’t necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I’d be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.
For instance, a lot of today’s fiction seems cynical and pessimistic about human nature; the characters frequently don’t seem to have goals related to anything other than their immediate social environment; and they often don’t pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.
worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership
This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I’m very skeptical of our ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?
(PS: if you’re interested in posting but unsure about content, I’d be excited to help answer any q’s or read a draft! My email is in my profile.)
What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that’s not a strong argument against doing it right now. You can’t start a political party with support from 0.01% of the population!
In general, we should do things that don’t scale but are optimal right now, rather than things that do scale but aren’t optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.
I would be extremely interested if you were to hypothetically write an “intro to child protection/welfare for EAs” post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment shows that other people agree :)
Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.
“Cause X” usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (it may come from this talk). So I’d expect a cause which the EA community decided was “cause X” to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person’s take.)
While climate change doesn’t immediately appear to be neglected, it seems possible that many people/orgs “working on climate change” aren’t doing so particularly effectively.
Historically, it seems like the environmental movement has an extremely poor track record at applying an “optimizing mindset” to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought which blame almost anyone except the actual biggest problem (agriculture).
Of course, I have no idea how much this consideration increases the “effective neglectedness” of climate change. I expect that there are still enough people applying an optimizing mindset to make it reasonably non-neglected, but maybe only on par with global health, rather than massively less neglected as you might guess from news coverage?
If one person-year is 2000 hours, then that implies you’re valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
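To make the arithmetic explicit, here’s a minimal sketch (in Python) of the back-of-envelope calculation. The $170k person-year valuation and the $1,000–$2,000 marginal grant cost are hypothetical placeholders back-derived from the $85/hour and 12–24 person-hour figures above, not numbers CEA has quoted.

```python
# Hypothetical back-of-envelope check of the numbers above. The person-year
# valuation and marginal grant cost are placeholders back-derived from the
# $85/hour and 12-24 person-hour figures, not numbers quoted by CEA.
HOURS_PER_PERSON_YEAR = 2000

def implied_hourly_rate(person_year_value: float) -> float:
    """Hourly rate implied by a given valuation of one person-year."""
    return person_year_value / HOURS_PER_PERSON_YEAR

def implied_hours_per_grant(marginal_grant_cost: float, person_year_value: float) -> float:
    """Person-hours per marginal grant implied by the marginal cost estimate."""
    return marginal_grant_cost / implied_hourly_rate(person_year_value)

print(implied_hourly_rate(170_000))              # 85.0 ($/hour)
print(implied_hours_per_grant(1_000, 170_000))   # ~11.8 hours
print(implied_hours_per_grant(2_000, 170_000))   # ~23.5 hours
```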
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I’m sure there are other overheads that I don’t know about, but I’m curious if you (or someone from CEA) knows what they are?
[Not trying to imply that CEA is failing to optimize here or anything—I’m mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
I think we should think carefully about the norm being set by the comments here.
This is an exceptionally transparent and useful grant report (especially Oliver Habryka’s). It’s helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.
But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.
If you value transparency in EA and want to see more of it (and you’re not a donor to the LTF fund), it seems to me like you should chill out here. That doesn’t mean don’t question the grants, but it does mean you should:
Apply even more of the principle of charity than usual
Take time to phrase your question in the way that’s easiest to answer
Apply some filter and don’t ask unimportant questions
Use a tone that minimizes stress for the person you’re questioning
Wow! This is an order of magnitude larger than I expected. What’s the source of the overhead here?
This is true as far as it goes, but I think that many EAs, including me, would endorse the idea that “social movements are the [or at least some of the] key drivers of change in human history.” It seems perverse to assume otherwise on a forum whose entire point is to advance a social movement that claims to, e.g., help participants have 100x more positive impact in the world.
More generally, it’s true that your chance of convincing “constitutionally disinclined” people with two papers is low. But your chance of convincing anyone is zero if all you offer is either (1) a bare assertion that there’s some good stuff there somewhere, or (2) the claim that they will understand you after spending 20 hours reading some very long books.
Also, I think your chance of convincing non-constitutionally-disinclined people with the right two papers is higher than you think. Although you’re correct that two papers directly arguing “you should use paradigm x instead of paradigm y” may not be super helpful, two pointers to “here are some interesting conclusions that you’ll come to if you apply paradigm x” can easily be enough to pique someone’s interest.
I’m very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which seems to have been involved in most of the biggest initiatives to scale out EA’s vetting, through EA Grants and EA Funds).
What % of grant applicants are in the “definitely good enough” vs “definitely (or reasonably confidently) not good enough” vs “uncertain + not enough time/expertise to evaluate” buckets?
(Are these the right buckets to be looking at?)
What do you feel your biggest constraints are to improving the impact of your grants? Funding, application quality, vetting capacity, something else?
Do you have any upcoming plans to address them?
Note also that the EA Meta and Long-Term Future Funds seem to have gone slightly in the direction of “less established” organizations since their management transition, and it seems like their previous conventionality might have been mostly a reflection of one specific person (Nick Beckstead) not having enough bandwidth.
It seems easier to increase the efficiency of your work than the quality.
In software engineering, I’ve found the exact opposite. It’s relatively easy for me to train people to identify and correct flaws in their own code: I point out the problems in code review and try to explain the underlying heuristics/models I’m using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.
(Of course there are many reasons why other types of work might be different from software eng!)
In addition to Khorton’s points in a sibling comment, GiveWell explicitly optimizes not just for expected value by their own lights, but for transparency/replicability of reasoning according to certain standards of evidence. If your donors are willing to be “highly engaged” or trust you a lot, or if they have different epistemics from GiveWell (e.g., if they put relatively more weight on models of root-level causes of poverty/underdevelopment, compared to RCTs), I bet there’s something else out there that they would think is higher expected value.
Of course, finding and vetting that thing is still a problem, so it’s possible that the thoroughness and quality of GW’s research outweighs these points, but it’s worth considering.
This is why I think Wave’s two-work-test approach is useful; even if someone “looks good on paper” and makes it through the early filters, it’s often immediately obvious from even a small work sample that they won’t be at the top of the applicant pool, so there’s no need for the larger sample.
Downvoted for not being at least two of true, necessary or kind. If you’re going to be snide, I think you should do a much better job of defending your claims rather than merely gesturing at a vague appeal to “holistic and historically extended nature.”
You’ve left zero pointers to the justifications for your beliefs that could be followed by a good-faith interlocutor in under ~20h of reading. Nor have you made an actual case for why a 20-hour investment is required for someone to even be qualified to dismiss the field (an incredible claim given the number of scholars who are willing to engage with arguments based on far less than 20 hours of background reading).
Your comment could be rewritten mutatis mutandis with “Scientology” instead of “social movement studies,” with practically no change to the argument structure. I think an argument for why a field is worth looking into should strive for more rigor and fewer vaguely insulting pot-shots.
(EDIT: ps, I’m not the downvoter on your other two responses. Wish they’d explained.)
1. Un-timed work test (e.g. OPP research analyst)
Huh. I’m really surprised that they find this useful. One of the main ways that Wave employees’ productivity has varied is in how quickly they can accomplish a task at a given level of quality, which varies by an order of magnitude between our best and worst candidates. (Or equivalently, how good of a job they can do in a fixed amount of time.) It seems like not time-boxing the work sample would make it much, much harder to make an apples-to-apples quality comparison between applicants, because slower applicants can spend more time to reach the same level of quality.
It’s much more understandable to me for the grants to have labor-intensive processes, since they can’t fire bad performers later so the effective commitment they’re making is much higher. (A proposal that takes weeks to write is still a questionable format IMO in terms of information density/ease of evaluation, but I don’t know much about grant-making, so this is weakly held.)
I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!