A related confusion for me is why there is any EA comparative advantage in policy/research; naively you’d expect external policy groups, consultancies, and academia to do a fine job of it. Yet in practice I think many EA orgs have paid academics to investigate questions of interest to EAs, and while there’s a lot of interesting work, the hit rate is lower than we might naively have expected (moderate confidence; lukeprog and others can correct me on whether this gestalt view is correct).
So maybe this is a useful reference class to consider.
I don’t think EAs have a comparative advantage in policy/research in general, but I do think some EAs have a comparative advantage in doing some specific kinds of policy/research for other EAs, since EAs care more than many (not all) clients about certain analytic features, e.g. scope-sensitivity, focus on counterfactual impact, probability calibration, reasoning transparency of a particular sort, a tolerance for certain kinds of weirdness, etc.