then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that’s also consistent with the elasticity view of neglectedness, isn’t it?)
Can you expand on this? I only know of elasticity from reading around it after Rob's comments in response to the first draft of this essay, so if there's some significance to it that isn't captured in the equations given, I maybe don't know it. If it's just a case of relabelling, I don't see how it would solve the problems with the equations, though—unused variables and divisions by zero seem fundamentally problematic.
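For what it's worth, here is my sketch of the constant-elasticity formulation as I understand it (the functional form and the symbols are my own assumptions for illustration, not the essay's equations):

```latex
% A constant-elasticity model of returns to a field (my assumption, for
% illustration). R is the resources already invested in the field, G(R)
% the good done, and epsilon the (constant) elasticity of G with
% respect to R.
\[
  G(R) = c\,R^{\varepsilon}
  \qquad\Longrightarrow\qquad
  G'(R) = \varepsilon\,c\,R^{\varepsilon - 1}.
\]
% With 0 < epsilon < 1 the marginal value G'(R) falls as R grows, so
% the elasticity view does capture diminishing returns; but G'(R) still
% blows up as R approaches 0, which is the division-by-zero worry for a
% completely untouched field.
```

If the elasticity view is just a relabelling of something like this, the R = 0 problem seems to survive the relabelling.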
But because lots of other people work on climate change, if you hadn’t done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)
But [this only holds to the extent that the field is proportionally less neglected—a priori you’re less replaceable in an area that’s 1⁄3 filled than one which is half filled, even if the former has a far higher absolute number of people working in it], which is just point 6 from the ‘Diminishing returns due to problem prioritisation’ section applied. I think all the preceding points from that section could apply as well—eg the more rational people tend to work on (eg) AI-related fields, the better comparative chance you have of finding something importantly neglected within climate change (point 5); your awesome high-impact neglected climate change thing might turn out to be something which actually increases the value of subsequent work in the field (point 4); and so on.
To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus. I just think it’s one of a huge number of variables that do so, and a comparatively low-weighted one. As such, I can’t see a good reason for EAs having chosen to focus on it over several others, let alone over trusting the estimates from even a shallow dive into what options there are for contributing to an area.
To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.
On reflection I don’t think I believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.
The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on
This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I’d still disagree is that I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they’re more rational in one context than the other. A key part of effective altruism’s value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.
in which case more people working on a field would indicate that it was more worth working on.
I think if you really believe people are rational in the way described, more people working on a field doesn’t necessarily give you a clue as to whether more people should be working on it, because you expect the number of people working on it to roughly track the number of people who ought to work on it—you think the people who are not working on it are also rational, so there must be circumstances under which their staying away is correct, too.
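One way to formalise this intuition (my sketch, with notation I've made up for illustration): if a fixed pool of rational workers allocates itself to maximise total good done, the equilibrium condition is that marginal value is equalised across all populated fields, so headcounts by themselves carry no signal.

```latex
% Equimarginal sketch (my notation, for illustration). A pool of R
% rational workers splits across fields to maximise total good done:
%   max  sum_i G_i(R_i)   subject to   sum_i R_i = R.
% The first-order condition at an interior optimum is
\[
  G_i'(R_i^{*}) = \lambda \quad \text{for every field } i \text{ with } R_i^{*} > 0,
\]
% i.e. every populated field offers the same marginal value lambda, so
% observing that R_i^* is large or small tells a newcomer nothing about
% which field to enter.
```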
Even if people pick interventions at random, the more people who enter a cause, the more the best interventions will get taken (by chance), so you still get diminishing returns even if people aren’t strategically selecting.
To clarify, this only applies if everyone else is picking interventions at random, but you’re still managing to pick the best remaining one (or at least better than chance).
It also seems to me like it applies across causes as well as within causes.
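As a sanity check on the random-selection point, here is a toy simulation (entirely my own construction: the Pareto value distribution, its shape parameter, and the field sizes are arbitrary assumptions). Earlier entrants claim interventions uniformly at random; you then take the best one still open, as in the clarification above, and the value of that best remaining option falls as the field fills up.

```python
import random

def best_remaining_value(n_prior_entrants, n_interventions=100, n_trials=10_000):
    """Average value of the best intervention still open after
    n_prior_entrants have each claimed one uniformly at random.

    Intervention values are drawn from a heavy-tailed Pareto
    distribution (an arbitrary modelling choice), so a few
    interventions are far more valuable than the rest.
    """
    total = 0.0
    for _ in range(n_trials):
        values = [random.paretovariate(1.5) for _ in range(n_interventions)]
        # Random prior entrants each take one intervention; keeping a
        # uniform random subset of survivors is equivalent.
        remaining = random.sample(values, n_interventions - n_prior_entrants)
        total += max(remaining)  # the discerning newcomer takes the best one left
    return total / n_trials

for k in (0, 25, 50, 75, 90):
    print(f"{k:2d} prior entrants: best remaining value ≈ {best_remaining_value(k):.2f}")
```

Nothing in the toy model cares whether the options being claimed are problems within a cause or causes themselves, which is why the same diminishing-returns effect seems to apply across causes as well.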