Paying people for what they do works great if most of their potential impact comes from activities you can verify. But if their most effective activities are things they have a hard time explaining to others (yet have intrinsic motivation to do), you could miss out on a lot of impact by requiring them instead to work on what’s verifiable.
Perhaps funders should consider granting motivated altruists a multi-year basic income. Then they no longer have to compromise[1] between what’s explainable/verifiable and what they think is most effective; they have the independence to pursue the latter directly.
Bonus point: People who are much more competent than you at X[2] will probably behave in ways you don’t recognise as more competent. If you could recognise them, they wouldn’t be much more competent than you. Your “deference limit” is the level of competence above which you can no longer reliably judge the difference between experts.
If good research is heavy-tailed and we’re in a positive-selection regime (searching for the exceptional few rather than weeding out the bad), then cautiousness actively selects against the features with the highest expected value.
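To see why, here’s a minimal sketch. I’m assuming, purely for illustration, that project impact is lognormally distributed; the distribution and every parameter below are my assumptions, not data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: impact is lognormal, i.e. heavy-tailed.
impact = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

# Under heavy tails, a small fraction of projects carries a large
# share of the total impact.
top1_cut = np.quantile(impact, 0.99)
tail = impact >= top1_cut
print(f"top 1% of projects carry {impact[tail].sum() / impact.sum():.0%} of total impact")

# A "cautious" filter that loses half of the extreme-looking tail
# costs far more value than one that loses half of the middle.
mid = (impact >= np.quantile(impact, 0.45)) & (impact <= np.quantile(impact, 0.55))
print(f"drop half the tail:   -{impact[tail].sum() * 0.5 / impact.sum():.1%} of total value")
print(f"drop half the middle: -{impact[mid].sum() * 0.5 / impact.sum():.2%} of total value")
```

The point is only qualitative: the heavier the tail, the more a filter that clips extremes costs you.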
Consider how the cost of compromising between optimisation criteria interacts with which part of the impact distribution you’re aiming for. If you’re searching for a project in both the top p% on impact and the top p% on explainability-to-funders, you can expect only (p%)² of projects to fit both criteria, assuming the two are independent. Insist on the top 10% of each, for example, and only 1% of projects qualify.
But I think it’s an open question how and when the two distributions correlate. One reason to think they could sometimes be anticorrelated is that the projects with the highest explainability-to-funders are also the most likely to receive adequate attention from profit incentives alone.
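As a quick check on both paragraphs above, here’s a simulation sketch. The bivariate-normal scores and the correlation values are illustrative assumptions, not a model of real grant pipelines:

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_top_fraction(rho, p=0.10, n=1_000_000):
    """Fraction of projects in the top p of BOTH criteria when the
    two scores have correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    impact, explain = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    passes = (impact > np.quantile(impact, 1 - p)) & \
             (explain > np.quantile(explain, 1 - p))
    return passes.mean()

for rho in (0.0, 0.5, -0.5):
    print(f"rho={rho:+.1f}: {joint_top_fraction(rho):.4f} of projects pass both cuts")
# rho=0 reproduces the independence estimate of p^2 = 0.01;
# anticorrelation shrinks the eligible pool well below that.
```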
If you’re doing conjunctive search over projects/ideas for ones that score above a threshold on multiple criteria, it matters a lot which criterion you spend most of your parallel attention on when identifying candidates for further serial examination. Try out various examples here & here.
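Here’s a sketch of why the ordering matters, under stated assumptions: scores are bivariate normal, mildly anticorrelated, and you can only examine k candidates seriously. All the numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def impact_found(shortlist_by, n=10_000, k=10, rho=-0.3, trials=300):
    # Cheap "parallel" pass: shortlist the top k candidates on one
    # observable score. Costly "serial" pass: examine only those k
    # and keep the best true impact among them.
    cov = [[1.0, rho], [rho, 1.0]]
    found = []
    for _ in range(trials):
        impact, explain = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        key = impact if shortlist_by == "impact" else explain
        shortlist = np.argsort(key)[-k:]
        found.append(impact[shortlist].max())
    return np.mean(found)

print("shortlist by impact proxy:  ", round(impact_found("impact"), 2))
print("shortlist by explainability:", round(impact_found("explainability"), 2))
```

The serial stage can only choose from whatever the parallel stage lets through, so the criterion you shortlist on effectively dominates the search.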
[2] At least for hard-to-measure activities where most of the competence derives from knowing what to do in the first place. I reckon this includes most fields of altruistic work.