I’ll start with the most important first:
“Perhaps the global economy is advancing fast enough or faster than enough to keep pace with the increasing difficulty of switching resource-bases, but that feels like a potential house of cards—if something badly damages the global economy (say, a resource irreplaceably running out, or a project to replace one unexpectedly failing), the gulf between several other depleting resources and their possible replacements could effectively widen.”
Yes, I acknowledge that is a risk. Personally I have never found a persuasive case that this will probably happen for any particular pressing need we have. But, as I say, the future is uncertain and even if everyone thinks it’s unlikely, we could be wrong. So work to make a bigger buffer does have value.
But the question I am concerned with is whether it’s the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no.
“The possible cascade from this is a GCR in itself, and one that numerous people seem to consider a serious one. I feel like we’d be foolish to dismiss the large number of scientifically literate doomsayers based on non-expert speculation.”
Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:
1. Noticing that market prices—a collective judgement of millions of informed people in these industries—seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.
2. Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.
3. Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.
I think the pessimistic take here is a contrarian bet. It may be a bet worth making, but it has to be compared to other contrarian bets that could be more compelling.
“it seems far too superficial to justify turning people away from working on the subject if that’s where their skills and interests lie.”
My comments in the piece are merely that I don’t encourage people to work on it, while acknowledging that it is the best fit for some people’s skills.
“In particular in seems unclear that economic-philosophical research into GCR and X-risk has a greater chance of actually lowering such outcomes than scientific and technological research into technologies that will reliably do so once/if they’re available.”
The contrast I intended to draw there is with research into GCRs unrelated to resource shortages, particularly dangers from new technologies.
“Yes, people can switch from one resource to another as each runs low, but it would be very surprising if in almost all cases the switch wasn’t to a higher-hanging fruit. People naturally tend to grab the most accessible/valuable resources first.”
It’s true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don’t see a reason for it to be higher overall, in expectation.
“But the question I am concerned with is whether it’s the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no.”
Sure. I mean, we basically agree, except that I feel much lower confidence (and anxiety at the confidence with which non-specialists make these pronouncements). Going into research in general is something that I’ve mostly felt more pessimistic about as an EA approach than 80K are, but if someone already partway down the path to a career based on resource depletion showed promise and passion in it, I’d think it plausible it was optimal for them to continue.
“Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:
Noticing that market prices—a collective judgement of millions of informed people in these industries—seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.”
I would probably trust the market over a single scientist, but I would trust the collective judgement of a field of scientists over the market. I don’t see what mechanism is supposed to make the market a reliable predictor of anything if not a reflection of the scientific understanding of the field with individual randomness mostly drowned out.
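The purely statistical part of that mechanism (many independent estimates averaging out individual randomness) can be illustrated with a minimal sketch. Everything in it is a hypothetical placeholder: the “true” value, the noise level and the number of estimators are made-up numbers chosen only to show how the error of an average shrinks as independent estimates are added.

```python
import random

# Minimal sketch: averaging many noisy, independent, unbiased estimates
# reduces random error. All numbers are hypothetical placeholders.
random.seed(0)

TRUE_VALUE = 100.0   # the unknown quantity, e.g. some future level of resource output
NOISE = 20.0         # standard deviation of each estimator's individual error

def one_estimate():
    """A single person's noisy but unbiased guess at the true value."""
    return random.gauss(TRUE_VALUE, NOISE)

TRIALS = 500
for n in (1, 10, 100, 1000):
    # Typical absolute error of the average of n independent guesses.
    typical_error = sum(
        abs(sum(one_estimate() for _ in range(n)) / n - TRUE_VALUE)
        for _ in range(TRIALS)
    ) / TRIALS
    print(f"{n:>5} estimators: typical error of the average ~ {typical_error:.2f}")

# The error shrinks roughly like 1/sqrt(n): the sense in which individual
# randomness gets "mostly drowned out". Averaging does nothing, however, about
# errors shared across all estimators, such as a mistaken model everyone uses.
```

Whether real traders’ errors are actually independent of one another, and whether prices are unbiased to begin with, is of course exactly what the disagreement here turns on.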
“Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.”
I’ve seen the same, but my own sense is that the reverse problem—economists having an astonishing lack of understanding of science—is much more acute. Also, I find scientists more scrupulous about the limits of their predictive ability. To give specific examples, two of which involve figures close to the EA movement: Steven Landsburg informing Stephen Hawking that his understanding of physics is ‘90% of the way there’; Robin Hanson arguing, without a number in sight, that ‘Most farm animals prefer living to dying; they do not want to commit suicide’ and therefore that vegetarianism is harmful; and Bjorn Lomborg’s head-on collision with apparently the entire field of climate science in The Skeptical Environmentalist.
“Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.”
I can’t opine on this, except that I still feel greater epistemic humility is worthwhile. If your conclusions are right, it seems worth trying to get them published in a prominent scientific journal (or, if not by you, then by an academic who shares your views and perhaps hasn’t already alienated the journal in question). Even if you don’t manage it, one would hope you’d get decent feedback on what the reviewers perceived as the flaws in your argument.
“It’s true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don’t see a reason for it to be higher overall, in expectation.”
Perhaps, but I don’t feel like you’ve acknowledged the problem that technological progress relies on technological progress, such that this could turn out to be a house of cards. As such, it needn’t necessarily be resource depletion that brings it crashing down—any GCR could have the same effect. So work on resource depletion provides some insurance against such a multiply-catastrophic scenario.
I don’t think Rob is aiming this piece at people who are already part way down a research track and have a passion for this area.
Rather, we’ve seen that a large fraction of socially concerned grads (Rob estimates 10% of the people he’s coached) think this is a pressing issue they should consider going into, before having done any research or committed to a cause. This piece is aimed at them.
That seems pretty reasonable except I take issue with one thing:
“I don’t see what mechanism is supposed to make the market a reliable predictor of anything if not a reflection of the scientific understanding of the field with individual randomness mostly drowned out.
…
I’ve seen the same, but my own sense is that the reverse problem—economists having an astonishing lack of understanding of science—is much more acute.”
The people generating these market prices are not principally economists. I don’t think economists have any particular wisdom about the natural sciences, but that’s not what I’m relying on.
Often the people trading are those with detailed knowledge of an industry, who have a better shot at forecasting, say, future oil output. They take a great interest in what scientists and engineers say, if it’s credible, because it can help them make money. Where the prices traders generate are inconsistent with what an outspoken pessimist or optimist says, I downgrade that person’s reliability, because they haven’t yet managed to persuade people with money on the line.
An economist need know nothing at all about the details of US politics to know that establishing a liquid prediction market can get them good information about the likely outcome of an election.
By contrast, a natural scientist who doesn’t know how businesses respond to resource scarcity is in deep trouble trying to forecast the likely outcome, because they lack half the picture.
Now this is no guarantee, because prices often end up being misjudged. But it’s the best thing I can see to go on for the modal scenario.
Consistent with that, if resource prices and futures spike, I will upgrade this cause area a lot.
(reposted from slightly divergent Facebook discussion)
I sometimes wonder if the ‘neglectedness criterion’ isn’t overstated in current EA thought. Is there any solid evidence that a cause being less neglected makes marginal contributions to it massively worse?
Marginal impact is a product of a number of factors, of which the (log of the?) number of people working on the cause is one. But the bigger the area, the more thinly that number is stretched across any subfield, and resource depletion is an enormous category, so it seems unlikely that the number of people working on any specific part of it will exceed the number working on core EA issues by more than a couple of orders of magnitude. Even if that equated to a marginal-effectiveness multiplier of 0.01 (which seems far too pessimistic to me), we’re used to seeing such multipliers become virtually irrelevant when comparing between causes. I doubt many X-riskers would feel deterred if you told them their chances of reducing X-risk were comparably nerfed.
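To make the arithmetic in the previous paragraph concrete, here is a minimal sketch of the multiplicative model it gestures at (marginal impact as scale times tractability times a crowdedness penalty). The specific figures, including the 0.01 penalty from above and the assumed 1000x difference in the other factors, are hypothetical placeholders rather than estimates for any real cause.

```python
# Minimal sketch of a multiplicative model of marginal impact. All numbers are
# hypothetical placeholders, not estimates for any real cause.

def marginal_impact(scale: float, tractability: float, crowdedness_penalty: float) -> float:
    """Marginal impact as a product of factors; crowdedness_penalty stands in for
    the neglectedness factor (whether it scales like 1/N or log N is left open)."""
    return scale * tractability * crowdedness_penalty

# Cause A: heavily worked on, so it takes the pessimistic 0.01 penalty discussed
# above, but is assumed (hypothetically) to be 1000x larger in scale.
cause_a = marginal_impact(scale=1000.0, tractability=1.0, crowdedness_penalty=0.01)

# Cause B: fully neglected (no penalty) but of baseline scale and tractability.
cause_b = marginal_impact(scale=1.0, tractability=1.0, crowdedness_penalty=1.0)

print(f"cause A: {cause_a}, cause B: {cause_b}")  # cause A: 10.0, cause B: 1.0
# Under these made-up numbers the two-orders-of-magnitude crowdedness penalty is
# swamped by the assumed differences in the other factors.
```

The opposite conclusion falls out just as easily if the between-cause differences in scale and tractability are assumed to be small, so the sketch only shows the shape of the argument, not its conclusion.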
Michael Wiebe commented on my first reply:
“No altruism needed here; profit-seeking firms will solve this problem.”
That seems like begging the question. So long as the gap between a depleting resource and its replacement is sufficiently small, they probably will do so, but if for some reason it widens sufficiently, profit-seeking firms will have little incentive or even ability to bridge it.
I’m thinking of the current example of in vitro meat as a possible analogue—once the technology for it is cracked, the companies that produce it will be able to make a killing undercutting naturally grown meat. But even now, with prototypes appearing, it seems too distant to entice more than a couple of companies to actively pursue it. Five years ago, virtually none were—all the research on it was being done by a small number of academics. And that is a relatively tractable technology that we’ve (I think) always had a pretty clear road map to developing.
In my comments on this page, I argue that we already have the technology to sustainably support 10 billion people at the US standard of living. I do want to turn this into a paper, but I have been prioritizing more serious GCRs (I would be interested in finding a collaborator on the resource paper). Of course even if we have the technology, it could be expensive. I also discuss the Limits to Growth books, which I think have done the most sophisticated modeling of resource constraints. They predict a crash, but I am skeptical.