Excellent to see some challenge to this framework! I was particularly pleased to see this line: “in the ‘major arguments against working on it’ section they present info like ‘the US government spends about $8 billion per year on direct climate change efforts’ as a negative in itself.” I’ve often thought that 80k communicates about this oddly—after all, for all we know, maybe there’s room for $10 billion to be spent on climate change before returns start diminishing.
However, having looked through this, I’m not sure I’ve been convinced to update much against neglectedness. After all, if you clarify that the % changes in the formula are really meant to be elasticities (which you allude to in the footnotes, and which I agree isn’t clear in the 80k article), then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that’s also consistent with the elasticity view of neglectedness, isn’t it?)
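To spell out the reading I have in mind (the formalisation below is mine, not from the 80k article; U, S and R are my labels for good done, the fraction of the problem solved, and resources committed):

```latex
% Chain-rule identity behind the factored framework, with each
% "% change" read as a log-derivative, i.e. an elasticity.
\[
\underbrace{\frac{dU}{dR}}_{\text{marginal value}}
= \underbrace{\frac{dU}{d\ln S}}_{\text{importance}}
\times \underbrace{\frac{d\ln S}{d\ln R}}_{\text{tractability}}
\times \underbrace{\frac{d\ln R}{dR}}_{\text{neglectedness}\,=\,1/R}
\]
```

On this reading the only division by zero is at R = 0; the 1/R factor already builds diminishing returns into the neglectedness term, and any further diminishing shows up as the tractability elasticity falling while R grows.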
Why I still think I’m in favour of including neglectedness: it matters for counterfactual impact. That is, in a crowded area (e.g. climate change), it’s more likely that if you had never gone into that area, someone else would have come along and achieved the same outcomes (or found the same results) as you. That likelihood drops if the area is neglected.
So a claim that might usefully update my views looks something like this hypothetical dialogue:
Climate change has lots of people working on it (bad)
However there are sub-sectors of climate change work that are high impact and neglected (good)
But because lots of other people work on climate change, if you hadn’t done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)
But [some argument that I haven’t thought of!]
then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that’s also consistent with the elasticity view of neglectedness, isn’t it?)
Can you expand on this? I only know of elasticity from reading around it after Rob’s comments in response to the first draft of this essay, so if there’s some significance to it that isn’t captured in the equations given, I maybe don’t know it. If it’s just a case of relabelling, I don’t see how it would solve the problems with the equations, though: unused variables and divisions by zero seem fundamentally problematic.
But because lots of other people work on climate change, if you hadn’t done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)
But [this only holds to the extent that the field is proportionally less neglected: a priori you’re less replaceable in an area that’s 1/3 filled than one which is half filled, even if the former has a far higher absolute number of people working in it], which is just point 6 from the ‘Diminishing returns due to problem prioritisation’ section applied. I think all the preceding points from that section could apply as well: e.g. the more that rational people tend to work on fields like AI, the better comparative chance you have of finding something importantly neglected within climate change (5); your awesome high-impact neglected climate change thing might turn out to be something which actually increases the value of subsequent work in the field (4); and so on.
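As a toy illustration of the proportional point (the niche structure and all numbers here are my own assumptions, not from the essay): treat a field as a set of distinct project “niches”, some fraction already filled, with future entrants picking unfilled niches uniformly at random.

```python
# Toy replaceability model (assumptions mine): how likely is it that
# the niche you would have taken gets filled anyway, in the world
# where you never enter the field?
import random

def p_replaced(niches, filled, newcomers, trials=2000):
    """Estimate P(someone else takes your would-be niche)."""
    unfilled = list(range(niches - filled))
    hits = 0
    for _ in range(trials):
        yours = random.choice(unfilled)  # the niche you would have taken
        picks = set(random.sample(unfilled, min(newcomers, len(unfilled))))
        hits += yours in picks
    return hits / trials

# A big field that's a third filled vs a small one that's half filled,
# with newcomer inflow proportional to the current workforce
# (a further assumption of the toy model):
print(p_replaced(niches=3000, filled=1000, newcomers=100))  # ~0.05
print(p_replaced(niches=100, filled=50, newcomers=5))       # ~0.10
```

On those made-up numbers you come out about half as replaceable in the larger, proportionally-more-neglected field, despite it having twenty times as many people working in it.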
To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus. I just think it’s one of a huge number of variables that do so, and a comparatively low-weighted one. As such, I can’t see a good reason for EAs to have singled it out over several of the others, let alone to trust it over the estimates from even a shallow dive into the options for contributing to an area.
To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.
On reflection I don’t think I believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.
The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on
This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I’d still disagree is that I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they’re more rational in one context than the other. A key part of effective altruism’s value, it seems to me, is recognising this discrepancy and arguing that it should be eliminated.
in which case more people working on a field would indicate that it was more worth working on.
I think if you really believe people are rational in the way described, the number of people working on a field doesn’t give you a clue as to whether more people should be working on it, because you expect that number to roughly track the number of people who ought to be working on it. You’d take the people who are not working on it to be rational too, so there must be circumstances under which staying out is also correct.
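A minimal sketch of that equilibrium intuition (the cause “scales” and the logarithmic returns curve are both hypothetical assumptions of mine): rational entrants each join whichever cause currently offers the highest marginal value, so marginal values equalise and headcount carries no signal for the next entrant.

```python
# Toy model of fully rational cause selection (assumptions mine):
# each of 500 entrants joins the cause with the highest marginal value,
# where a cause with n workers has produced scale * log(1 + n) value.
import math

scales = {"A": 100.0, "B": 40.0, "C": 10.0}   # how "good" each cause is
workers = {cause: 0 for cause in scales}

def marginal(cause):
    """Value added by one more worker in this cause."""
    s, n = scales[cause], workers[cause]
    return s * (math.log(2 + n) - math.log(1 + n))

for _ in range(500):
    workers[max(scales, key=marginal)] += 1

print(workers)  # headcounts roughly track how good each cause is...
print({c: round(marginal(c), 3) for c in scales})  # ...and marginal values equalise
```

Headcount ends up tracking the optimal allocation, and every cause looks equally attractive at the margin, so crowdedness by itself tells the next rational entrant nothing.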
Even if people pick interventions at random, the more people who enter a cause, the more likely it is that the best interventions have already been taken (by chance), so you still get diminishing returns even when people aren’t strategically selecting.
To clarify, this only applies if everyone else is picking interventions at random, but you’re still managing to pick the best remaining one (or at least better than chance).
It also seems to me like it applies across causes as well as within causes.
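A quick Monte Carlo sketch of this exchange (my toy model, not from the thread): intervention values are drawn from a heavy-tailed distribution, a crowd of prior entrants picks uniformly at random, and you then take the best remaining option.

```python
# Random prior entrants, heavy-tailed intervention values (assumptions
# mine): the expected value of the best remaining intervention falls
# as the cause gets more crowded, with no strategic selection at all.
import random

def expected_best_remaining(n_interventions, crowd, trials=5000):
    """Average value of the best intervention left after random picking."""
    total = 0.0
    for _ in range(trials):
        # A few interventions are far better than most.
        values = [random.paretovariate(1.5) for _ in range(n_interventions)]
        taken = set(random.sample(range(n_interventions), crowd))
        total += max(v for i, v in enumerate(values) if i not in taken)
    return total / trials

for crowd in [0, 20, 50, 80, 95]:
    print(crowd, round(expected_best_remaining(100, crowd), 2))
```

The printed values fall as the crowd grows, which is the diminishing returns claimed above; and the caveat holds too, since the decline only appears because “you” take the best of what is left rather than picking at random yourself.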