To be clear, I do think neglectedness will roughly track the value of entering a field, all else literally being equal.
On reflection I don’t think I believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.
The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on
This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I’d still disagree is that I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they’re more rational in one context than the other. A key part of effective altruism’s value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.
in which case more people working on a field would indicate that it was more worth working on.
I think if you really believe people are rational in the way described, more people working on a field doesn’t necessarily give you a clue as to whether more people should be working on it, because you expect the number of people working on it to roughly track the number of people who ought to work on it. You think the people who are not working on it are also rational, so there must be circumstances under which their choice is correct, too.
Even if people pick interventions at random, the more people who enter a cause, the more the best interventions will get taken (by chance), so you still get diminishing returns even if people aren’t strategically selecting.
To clarify, this only applies if everyone else is picking interventions at random, but you’re still managing to pick the best remaining one (or at least better than chance).
It also seems to me like it applies across causes as well as within causes.
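The diminishing-returns claim a couple of comments up can be checked with a toy simulation. This is only a sketch under assumed conditions: a hypothetical pool of interventions with heavily skewed values, other entrants picking at random, and you taking the best intervention left over. The specific numbers and the geometric value distribution are illustrative assumptions, not anything from the discussion above.

```python
import random

random.seed(0)

def best_remaining_value(num_random_entrants, values):
    """Other entrants each take one intervention at random; you take
    the best of whatever remains."""
    pool = list(values)
    random.shuffle(pool)
    remaining = pool[num_random_entrants:]  # each random entrant removes one
    return max(remaining)

# Hypothetical pool: 100 interventions with a few great ones and many
# mediocre ones (values fall off geometrically).
values = [1.1 ** i for i in range(100)]

for n in (0, 50, 90, 99):
    avg = sum(best_remaining_value(n, values) for _ in range(2000)) / 2000
    print(f"{n:2d} random entrants -> expected value of your pick ~ {avg:.0f}")
```

Even though nobody else is selecting strategically, the expected value of your pick falls as more entrants join, because the best interventions get taken by chance. This matches the clarification above: the result depends on you doing better than chance with what remains.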