[Context: This is a research proposal I wrote two years ago for an application. I’m posing it here because I might want to link to it. I plan to spend a few weeks looking into a subquestion: how heavy-tailed is EA talent, and what does this imply for EA community building?]
Research proposal: Assess claims that “impact is heavy-tailed”
Why is this valuable?
EAs frequently have to decide how many resources to invest in estimating the utility of their available options; e.g.:
How much time to invest to identify the best giving opportunity?
How much research to do before committing to a small set of cause areas?
When deciding whether to hire someone now or wait for a more talented candidate in the future, how long is it worth waiting?
One major input to such questions is how heavy-tailed the distribution of altruistic impact is: The better the best options are relative to a random option, the more valuable it is to identify the best options.
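This relationship between tail heaviness and the value of search can be illustrated with a toy simulation (my own sketch, not part of the original proposal; the two distributions are arbitrary stand-ins for "impact"): compare how much better the best of 100 options is than a random option under a thin-tailed versus a heavy-tailed distribution.

```python
import random
import statistics

random.seed(0)

def best_of_n(sample, n, trials=2_000):
    """Average value of the best of n random draws from `sample`."""
    return statistics.mean(
        max(random.choices(sample, k=n)) for _ in range(trials)
    )

# Two hypothetical "impact" distributions: a thin-tailed one (folded normal)
# and a heavy-tailed one (Pareto with tail index 1.5, i.e. infinite variance).
thin = [abs(random.gauss(1, 0.5)) for _ in range(100_000)]
heavy = [random.paretovariate(1.5) for _ in range(100_000)]

for name, sample in [("thin-tailed", thin), ("heavy-tailed", heavy)]:
    gain = best_of_n(sample, 100) / statistics.mean(sample)
    print(f"{name}: best of 100 options is {gain:.1f}x a random option")
```

Under the thin-tailed distribution, searching among 100 options gains only a modest factor over picking at random; under the heavy-tailed one, the gain is roughly an order of magnitude larger, which is why the shape of the tail matters so much for how long to keep searching.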
Claims like “impact is heavy-tailed” are widely accepted in the EA community—with major strategic consequences (e.g. [1], “Talent is high variance”)—but have sometimes been questioned [2, 3, 4, 5].
These claims are often made in an imprecise way, which makes it hard to estimate the extent of their practical implications (should you spend a month or a year doing research before deciding?), and hard to check if one actually disagrees about them. E.g., is the claim that we can now see that Einstein did much more for progress in physics than 90% of the world population at his time, or that in 1900 our subjective expected value for the progress Einstein would make would have been much higher than the value for a random physics graduate student, or something in between?
Suggested approach
1. Collect several claims of this type that have been made.
2. Review statistical measures of heavy-tailedness.
3. Limit the project’s scope appropriately. E.g., focus just on the claim that “talent is heavy-tailed” and its implications for community building.
4. Refine claims into precise candidate versions, e.g. something like “looking backwards, the empirical distribution of the number of published papers per researcher looks like it was sampled from a distribution that doesn’t have finite variance” rather than “researcher talent is heavy-tailed”.
5. Assess the veracity of those claims, based on published arguments about them and general properties of heavy-tailed distributions (e.g. [6]). Perhaps gather additional data.
6. Write up the results in an accessible way that highlights the true, precise claims and their practical implications.
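As a sketch of what step 2 might involve: one standard measure is the Pareto tail index, which the Hill estimator recovers from the largest observations of a sample. An estimated index below 2 is the kind of evidence the refined claim in step 4 appeals to (the variance does not exist). This is my own illustrative example on synthetic data, not anything from the proposal:

```python
import math
import random

def hill_tail_index(data, k):
    """Hill estimator of the Pareto tail index alpha, computed from the
    k largest observations. alpha < 2 suggests infinite variance."""
    xs = sorted(data, reverse=True)
    log_ratio = sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
    return 1.0 / log_ratio

random.seed(0)
# Synthetic stand-in for, say, a papers-per-researcher distribution,
# drawn from a Pareto with known tail index 1.5 (infinite variance).
sample = [random.paretovariate(1.5) for _ in range(50_000)]
alpha = hill_tail_index(sample, k=1_000)
print(f"estimated tail index: {alpha:.2f}")  # should be near 1.5
```

In practice the estimate is sensitive to the choice of k, which is one reason precise versions of "X is heavy-tailed" are harder to establish than the informal claim suggests.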
Concerns
There are probably good reasons why “impact is heavy-tailed” is widely accepted. I’m therefore unlikely to produce actionable results.
The proposed level of analysis may be too general.
This could be relevant. It’s not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.