(reposted from a slightly divergent Facebook discussion)
I sometimes wonder whether the 'neglectedness criterion' is overstated in current EA thought. Is there any solid evidence that a cause's being crowded makes marginal contributions to it massively worse?
Marginal impact is a product of several factors, of which the (log of the?) number of people working on a cause is only one. The bigger the area, the more thinly that number is stretched across any subfield; and resource depletion is an enormous category, so it seems unlikely that the number of people working on any specific area of it exceeds the number working on core EA issues by more than a couple of orders of magnitude. Even if that equated to a marginal-effectiveness multiplier of 0.01 (which seems far too pessimistic to me), we're used to seeing such multipliers become virtually irrelevant when comparing between causes. I doubt many X-riskers would feel deterred if you told them their chances of reducing X-risk were comparably nerfed.
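To make that arithmetic concrete, here is a minimal numeric sketch, assuming a simple ITN-style product model in which marginal impact is importance × tractability × a crowdedness multiplier. Every number in it (the importances, the 0.5 tractability, the 0.01 multiplier) is an illustrative assumption, not an estimate of any real cause:

```python
# Toy ITN-style product model: marginal impact = importance x tractability
# x crowdedness multiplier. All numbers are illustrative assumptions,
# not estimates of any real cause.

def marginal_impact(importance, tractability, crowd_mult):
    return importance * tractability * crowd_mult

# A neglected "core EA" cause vs. a crowded resource-depletion subfield
# with ~100x the workers (hence the hypothetical 0.01 multiplier).
neglected = marginal_impact(importance=1e3, tractability=0.5, crowd_mult=1.0)
crowded = marginal_impact(importance=1e7, tractability=0.5, crowd_mult=0.01)

print(neglected)  # 500.0
print(crowded)    # 50000.0: the 0.01 crowdedness penalty is swamped by a
                  # four-orders-of-magnitude difference in importance
```

On these made-up numbers, the crowded cause still wins by a factor of 100: the point is only that a fixed penalty of 0.01 can be dwarfed by between-cause differences in the other factors.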
Michael Wiebe commented on my first reply:
No altruism needed here; profit-seeking firms will solve this problem.
That seems like begging the question. So long as the gap between a depleting resource and its replacement is small enough, profit-seeking firms probably will solve the problem; but if for some reason the gap widens far enough, they will have little incentive, or even ability, to bridge it.
I'm thinking of the current example of in vitro meat as a possible analogue: once that technology is cracked, the companies producing it will be able to make a killing undercutting naturally grown meat. But even now, with prototypes appearing, it seems too distant to entice more than a couple of companies to pursue it actively. Five years ago virtually none were; all the research was being done by a small number of academics. And that is a relatively tractable technology for which (I think) we've always had a pretty clear road map.