worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership
This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I’m very skeptical of our ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?
(PS: if you’re interested in posting but unsure about content, I’d be excited to help answer any q’s or read a draft! My email is in my profile.)
What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that’s not a strong argument against doing it right now. You can’t start a political party with support from 0.01% of the population!
In general, we should do things that don’t scale but are optimal right now, rather than things that do scale but aren’t optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.
I would be extremely interested if you were to hypothetically write an “intro to child protection/welfare for EAs” post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment shows that other people agree :)
Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.
“Cause X” usually refers to an issue that is (one of) the most important one(s) to work on, but has been either missed or deprioritized for bad reasons by the effective altruism community (it may come from this talk). So I’d expect a cause which the EA community decided was “cause X” to receive an influx of interest in donations and direct work from the EA community, like how GiveWell directed hundreds of millions of dollars to their top charities, or how a good number of EAs went to work at nonprofits working on animal welfare. (For a potentially negative take on being Cause X, see this biorisk person’s take.)
While climate change doesn’t immediately appear to be neglected, it seems possible that many people/orgs “working on climate change” aren’t doing so particularly effectively.
Historically, it seems like the environmental movement has an extremely poor track record at applying an “optimizing mindset” to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought which blame almost anyone except the actual biggest problem (agriculture).
Of course, I have no idea how much this consideration increases the “effective neglectedness” of climate change. I expect that there are still enough people applying an optimizing mindset to make it reasonably non-neglected, but maybe only on par with global health rather than massively less neglected like you might guess from news coverage?
If one person-year is 2000 hours, then that implies you’re valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
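To make the arithmetic explicit, here’s a back-of-envelope sketch. The per-person-year cost (~$170k) and the per-grant marginal cost range ($1,000–$2,000) are my assumed inputs, chosen to match the $85/hour and 12–24 person-hour figures above; they aren’t numbers CEA has published.

```python
# Back-of-envelope check of the implied grant-processing overhead.
# Assumed inputs (hypothetical, chosen to match the figures in the comment):
HOURS_PER_PERSON_YEAR = 2000
COST_PER_PERSON_YEAR = 170_000   # assumed fully loaded cost of staff time
MARGINAL_COST_LOW = 1_000        # assumed marginal cost per grant (low end)
MARGINAL_COST_HIGH = 2_000      # assumed marginal cost per grant (high end)

hourly_rate = COST_PER_PERSON_YEAR / HOURS_PER_PERSON_YEAR  # -> $85/hour

# Implied person-hours of processing per marginal grant:
low_hours = MARGINAL_COST_LOW / hourly_rate    # ~12 person-hours
high_hours = MARGINAL_COST_HIGH / hourly_rate  # ~24 person-hours

print(hourly_rate, low_hours, high_hours)
```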
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I’m sure there are other overheads that I don’t know about, but I’m curious if you (or someone from CEA) knows what they are?
[Not trying to imply that CEA is failing to optimize here or anything—I’m mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
I think we should think carefully about the norm being set by the comments here.
This is an exceptionally transparent and useful grant report (especially Oliver Habryka’s). It’s helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.
But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.
If you value transparency in EA and want to see more of it (and you’re not a donor to the LTF fund), it seems to me like you should chill out here. That doesn’t mean don’t question the grants, but it does mean you should:
- Apply even more principle of charity than usual
- Take time to phrase your question in the way that’s easiest to answer
- Apply some filter and don’t ask unimportant questions
- Use a tone that minimizes stress for the person you’re questioning
Wow! This is an order of magnitude larger than I expected. What’s the source of the overhead here?
This is true as far as it goes, but I think that many EAs, including me, would endorse the idea that “social movements are the [or at least among the] key drivers of change in human history.” It seems perverse to assume otherwise on a forum whose entire point is to help the progress of a social movement that claims to e.g. help participants have 100x more positive impact in the world.
More generally, it’s true that your chance of convincing “constitutionally disinclined” people with two papers is low. But your chance of convincing anyone is zero if all you offer is either (1) a bare assertion that there’s some good stuff there somewhere, or (2) the claim that they will understand you after spending 20 hours reading some very long books.
Also, I think your chance of convincing non-constitutionally-disinclined people with the right two papers is higher than you think. Although you’re correct that two papers directly arguing “you should use paradigm x instead of paradigm y” may not be super helpful, two pointers to “here are some interesting conclusions that you’ll come to if you apply paradigm x” can easily be enough to pique someone’s interest.
I’m very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which it seems like has been involved in most of the biggest initiatives to scale out EA’s vetting, through EA Grants and EA Funds).
- What % of grant applicants are in the “definitely good enough” vs “definitely (or reasonably confidently) not good enough” vs “uncertain + not enough time/expertise to evaluate” buckets? (Are these the right buckets to be looking at?)
- What do you feel your biggest constraints are to improving the impact of your grants? Funding, application quality, vetting capacity, something else?
- Do you have any upcoming plans to address them?
Note also that the EA Meta and Long-Term Future Funds seem to have gone slightly in the direction of “less established” organizations since their management transition, and it seems like their previous conventionality might have been mostly a reflection of one specific person (Nick Beckstead) not having enough bandwidth.
It seems easier to increase the efficiency of your work than the quality.
In software engineering, I’ve found the exact opposite. It’s relatively easy for me to train people to identify and correct flaws in their own code–I point out the problems in code review and try to explain the underlying heuristics/models I’m using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.
(Of course there are many reasons why other types of work might be different from software eng!)
In addition to Khorton’s points in a sibling comment, GiveWell explicitly optimizes not just for expected value by their own lights, but for transparency/replicability of reasoning according to certain standards of evidence. If your donors are willing to be “highly engaged” or trust you a lot, or if they have different epistemics from GiveWell (e.g., if they put relatively more weight on models of root-level causes of poverty/underdevelopment, compared to RCTs), I bet there’s something else out there that they would think is higher expected value.
Of course, finding and vetting that thing is still a problem, so it’s possible that the thoroughness and quality of GW’s research outweighs these points, but it’s worth considering.
This is why I think Wave’s two-work-test approach is useful; even if someone “looks good on paper” and makes it through the early filters, it’s often immediately obvious from even a small work sample that they won’t be at the top of the applicant pool, so there’s no need for the larger sample.
Downvoted for not being at least two of true, necessary, or kind. If you’re going to be snide, I think you should do a much better job of defending your claims rather than merely gesturing at a vague appeal to “holistic and historically extended nature.”
You’ve left zero pointers to the justifications for your beliefs that could be followed by a good-faith interlocutor in under ~20h of reading. Nor have you made an actual case for why a 20-hour investment is required for someone to even be qualified to dismiss the field (an incredible claim given the number of scholars who are willing to engage with arguments based on far less than 20 hours of background reading).
Your comment could be rewritten mutatis mutandis with “scientology” instead of “social movement studies,” with practically no change to the argument structure. I think an argument for why a field is worth looking into should strive for more rigor and fewer vaguely insulting pot-shots.
(EDIT: ps, I’m not the downvoter on your other two responses. Wish they’d explained.)
1. Un-timed work test (e.g. OPP research analyst)
Huh. I’m really surprised that they find this useful. One of the main ways that Wave employees’ productivity has varied is in how quickly they can accomplish a task at a given level of quality, which varies by an order of magnitude between our best and worst candidates. (Or equivalently, how good of a job they can do in a fixed amount of time.) It seems like not time-boxing the work sample would make it much, much harder to make an apples-to-apples quality comparison between applicants, because slower applicants can spend more time to reach the same level of quality.
It’s much more understandable to me for the grants to have labor-intensive processes, since they can’t fire bad performers later so the effective commitment they’re making is much higher. (A proposal that takes weeks to write is still a questionable format IMO in terms of information density/ease of evaluation, but I don’t know much about grant-making, so this is weakly held.)
I’m sorry to see so many orgs take 10+ hours to get you only partway through the process, let alone multiple 40+ hour processes. This is especially glaring compared to the very low number of orgs that rejected you in under 5 hours.
It sounds like many of these orgs would benefit (both you and themselves!) from improving their evaluations to reject people earlier in the process.
My team at Wave currently has a technical interview process that takes under 10 hours across 4 stages (assuming you spend 1 hour on your cover letter and resume); the majority of rejections happen after less than 5 hours. The non-technical interview process is somewhat longer, but I would guess still not more than 15 hours, with the majority of applications rejected in under 5 hours (the final interview is a full day).
Notably, we do two work samples, a 2hr one (where most applicants are rejected) and a 4-5hr one for the final interview. If I were interviewing for a non-technical role I’d insert a behavioral interview after the first work sample as well. These shorter interviews help us screen out many candidates before we waste a ton of their time. It’s hard for me to imagine needing 8+ hours for a work sample unless the role is extremely complex and requires many different skills.
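The value of rejecting early can be made concrete with a toy funnel model. The pass rates below are hypothetical illustrations, not Wave’s actual numbers; each stage’s hours are only spent by applicants who passed every earlier stage.

```python
# Expected applicant hours in a staged interview funnel.
# (stage hours, fraction advancing) -- pass rates are hypothetical.
stages = [
    (1.0, 0.5),  # resume + cover letter: 1 hour, 50% advance (assumed)
    (2.0, 0.3),  # 2hr work sample: 30% advance (assumed)
    (4.5, 0.5),  # 4-5hr final work sample (assumed pass rate)
]

expected_hours = 0.0
reached = 1.0  # fraction of applicants reaching the current stage
for hours, pass_rate in stages:
    expected_hours += reached * hours
    reached *= pass_rate

# Compare against a single up-front 7.5-hour process that everyone completes.
print(expected_hours)
```

Under these assumed pass rates the average applicant spends well under 3 hours, versus 7.5 hours if every applicant did all the work up front; the bulk of the saving comes from the cheap early stages filtering most people out.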
Wow, thanks for the great in depth reply!
now weight purchasing fuzzies much more highly than I used to.
Do you mean charitable fuzzies specifically? What kinds of fuzzies do you purchase more of? Do you think this generalizes to more EAs?
What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.
Once upon a time, I read a Douglas Hofstadter book that convinced me that the answer was “nothing” (basically because determinism works at the level of basic physics, and morality / your perception of having free will operates about a gazillion levels of abstraction higher, such that applying the model “deterministic” to your own behavior is kind of like saying that no person is more than 7 years old because that’s the point where all the cells in their body get replaced).
I was in high school at the time so I don’t know if it would have the same effect on me, or you, today though.
This was very interesting food for thought, thanks!
Taking systemic change seriously would require EA to embrace a much wider range of methods and forms of evidence, embracing the inevitably uncertain judgments involved in the holistic interpretation of social systems and analysis of the dynamics of social change.
This is definitely correct, but I’d guess that where I (and many EAs) part ways with you is not in being in principle unwilling to make commitments to other methods/forms of evidence, but rather, not finding any other existing paradigms compelling or not agreeing on which ones we find compelling.
You can’t separate the question of “should I take systemic change seriously” from the question of “how compelling is the most compelling paradigm for thinking about systemic change,” so I think you would have a stronger chance of convincing EAs to take systemic change seriously by arguing why they should find a specific paradigm compelling.
Here are some features that might make a paradigm compelling to me. I think the current EA paradigm for addressing global poverty exhibits all of them, but it seems to me that one or more is lacking from (my stereotype of) any current paradigm for addressing systemic change:
- Tolerance of uncertainty and ability to course-correct
- Compatibility with our understanding of human behavior (e.g. the tendency of people to follow local incentives)
- Scope sensitivity (i.e. trying to reason about the relative sizes of different things)
- Grounding in consequentialism
- Not having its internal discourse co-opted by status seeking or “mood affiliation”