Inadequacy and Modesty
I’m posting my short new book here for discussion: Inadequate Equilibria: Where and How Civilizations Get Stuck. First chapter below, with the rest to follow over the coming days.
This is a book about two incompatible views on the age-old question: “When should I think that I may be able to do something unusually well?”
These two viewpoints tend to give wildly different, nearly cognitively nonoverlapping analyses of questions like:
- My doctor says I need to eat less and exercise, but a lot of educated-sounding economics bloggers are talking about this thing called the “Shangri-La Diet.” They’re saying that in order to lose weight, all you need to do is consume large quantities of flavorless, high-calorie foods at particular times of day; and they claim some amazing results with this diet. Could they really know better than my doctor? Would I be able to tell if they did?
- My day job is in artificial intelligence and decision theory. And I recall the dark days before 2015, when there was plenty of effort and attention going into advancing the state of the art in AI capabilities, but almost none going into AI alignment: better understanding AI designs and goals that can safely scale with capabilities. Though interest in the alignment problem has since increased quite a bit, it still makes sense to ask whether at the time I should have inferred from the lack of academic activity that there was no productive work to be done here; since if there were reachable fruits, wouldn’t academics be taking them?
- Should I try my hand at becoming an entrepreneur? Whether or not it should be difficult to spot promising ideas in a scientific field, it certainly can’t be easy to think up a profitable idea for a new startup. Will I be able to find any good ideas that aren’t already taken?
- The effective altruism community is a network of philanthropists and researchers who try to find the very best ways to benefit others per dollar, in full generality. Where should effective altruism organizations like GiveWell expect to find low-hanging fruit—neglected interventions ripe with potential? Where should they look to find things that our civilization isn’t already doing about as well as can be done?
When I think about problems like these, I use what feels to me like a natural generalization of the economic idea of efficient markets. The goal is to predict what kinds of efficiency we should expect to exist in realms beyond the marketplace, and what we can deduce from simple observations. For lack of a better term, I will call this kind of thinking inadequacy analysis.
Toward the end of this book, I’ll try to refute an alternative viewpoint that is increasingly popular among some of my friends, one that I think is ill-founded. This viewpoint is the one I’ve previously termed “modesty,” and the message of modesty tends to be: “You can’t expect to be able to do X that isn’t usually done, since you could just be deluding yourself into thinking you’re better than other people.”
I’ll open with a cherry-picked example that I think helps highlight the difference between these two viewpoints.
i.
I once wrote a report, “Intelligence Explosion Microeconomics,” that called for an estimate of the economic growth rate in a fully developed country—that is, a country that is no longer able to improve productivity just by importing well-tested innovations. A footnote of the paper remarked that even though Japan was the country with the most advanced technology—e.g., their cellphones and virtual reality technology were five years ahead of the rest of the world’s—I wasn’t going to use Japan as my estimator for developed economic growth, because, as I saw it, Japan’s monetary policy was utterly deranged.
Roughly, Japan’s central bank wasn’t creating enough money. I won’t go into details here.
A friend of mine, and one of the most careful thinkers I know—let’s call him “John”—made a comment on my draft to this effect:
How do you claim to know this? I can think of plenty of other reasons why Japan could be in a slump: the country’s shrinking and aging population, its low female workplace participation, its high levels of product market regulation, etc. It looks like you’re venturing outside of your area of expertise to no good end.
“How do you claim to know this?” is a very reasonable question here. As John later elaborated, macroeconomics is an area where data sets tend to be thin and predictive performance tends to be poor. And John had previously observed me making contrarian claims where I’d turned out to be badly wrong, like endorsing Gary Taubes’ theories about the causes of the obesity epidemic. More recently, John won money off of me by betting that AI performance on certain metrics would improve faster than I expected; John has a good track record when it comes to spotting my mistakes.
It’s also easy to imagine reasons an observer might have been skeptical. I hadn’t come up with my critique of Japan myself; I was reading other economists and deciding that I trusted the ones who were saying that the Bank of Japan was doing it wrong… Yet one would expect the governing board of the Bank of Japan to be composed of experienced economists with specialized monetary expertise. How likely is it that any outsider would be able to spot an obvious flaw in their policy? How likely is it that someone who isn’t a professional economist (e.g., me) would be able to judge which economic critiques of the Bank of Japan were correct, or which critics were wise?
How likely is it that an entire country—one of the world’s most advanced countries—would forgo trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?
Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.
Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?
I’ll refer to this genre of arguments as “modest epistemology.”
In conversation, John clarified to me that he rejects this genre of arguments; but I hear these kinds of arguments fairly often. The head of an effective altruism organization once gave voice to what I would consider a good example of this mode of thinking:
I find it helpful to admit to unpleasant facts that will necessarily be true in the abstract, in order to be more willing to acknowledge them in specific cases. For instance, I should expect a priori to be below average at half of things, and be 50% likely to be of below average talent overall; to know many people who I regard as better than me according to my values; to regularly make decisions that look silly ex post, and also ex ante; to be mistaken about issues on which there is expert disagreement about half of the time; to perform badly at many things I attempt for the first time; and so on.
The Dunning-Kruger effect shows that unskilled individuals often rate their own skill very highly. Specifically, although there does tend to be a correlation between how competent a person is and how competent they guess they are, this correlation is weaker than one might suppose. In the original study, people in the bottom two quartiles of actual test performance tended to think they did better than about 60% of test-takers, while people in the top two quartiles tended to think they did better than 70% of test-takers.
This suggests that a typical person’s guess about how they did on a test is evidence, but not particularly powerful evidence: the top quartiles are underconfident about how well they did, and the bottom quartiles are highly overconfident.
Given all that, how can we gain much evidence from our belief that we are skilled? Wouldn’t it be more prudent to remind ourselves of the base rate—the prior probability of 50% that we are below average?
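To make concrete just how weak this evidence is, consider a toy Bayesian calculation. The conditional probabilities below are invented for illustration (they are not taken from the Dunning-Kruger study); the point is only that when both skilled and unskilled people tend to rate themselves highly, self-assessment barely moves the posterior:

```python
# Toy Bayes update: how much should believing "I'm above average" raise
# the probability that you actually are? Assume (hypothetically) that
# 90% of above-average people and 70% of below-average people believe
# they are above average -- self-ratings skewed high across the board.

P_ABOVE = 0.5                # base rate: half of people are above average
P_BELIEF_GIVEN_ABOVE = 0.90  # assumed for illustration
P_BELIEF_GIVEN_BELOW = 0.70  # assumed for illustration

p_belief = (P_BELIEF_GIVEN_ABOVE * P_ABOVE
            + P_BELIEF_GIVEN_BELOW * (1 - P_ABOVE))
posterior = P_BELIEF_GIVEN_ABOVE * P_ABOVE / p_belief

print(f"P(above average | believes so) = {posterior:.2f}")
# -> 0.56: real evidence, but it only moves you from the 50% base rate
#    to about 56%.
```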
Reasoning along similar lines, software developer Hal Finney has endorsed “abandoning personal judgment on most matters in favor of the majority view.” Finney notes that the average person’s opinions would be more accurate (on average) if they simply deferred to the most popular position on as many issues as they could. For this reason:
I choose to adopt the view that in general, on most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment.
[…] I would suggest that although one might not always want to defer to the majority opinion, it should be the default position. Rather than starting with the assumption that one’s own opinion is right, and then looking to see if the majority has good reasons for holding some other view, one should instead start off by following the majority opinion; and then only adopt a different view for good and convincing reasons. On most issues, the default of deferring to the majority will be the best approach. If we accept the principle that “extraordinary claims require extraordinary evidence”, we should demand a high degree of justification for departing from the majority view. The mere fact that our own opinion seems sound would not be enough.1
In this way, Finney hopes to correct for overconfidence and egocentric biases.
Finney’s view is an extreme case, but helps illustrate a pattern that I believe can be found in some more moderate and widely endorsed views. When I speak of “modesty,” I have in mind a fairly diverse set of positions that rest on a similar set of arguments and motivations.
I once heard an Oxford effective altruism proponent crisply summarize what I take to be the central argument for this perspective: “You see that someone says X, which seems wrong, so you conclude their epistemic standards are bad. But they could just see that you say Y, which sounds wrong to them, and conclude your epistemic standards are bad.”2 On this line of thinking, you don’t get any information about who has better epistemic standards merely by observing that someone disagrees with you. After all, the other side observes just the same fact of disagreement.
Applying this argument form to the Bank of Japan example: I receive little or no evidence just from observing that the Bank of Japan says “X” when I believe “not X.” I also can’t be getting strong evidence from any object-level impression I might have that I am unusually competent. So did my priors imply that I and I alone ought to have been born with awesome powers of discernment? (Modest people have posed this exact question to me on more than one occasion.)
It should go without saying that this isn’t how I would explain my own reasoning. But if I reject arguments of the form, “We disagree, therefore I’m right and you’re wrong,” how can I claim to be correct on an economic question where I disagree with an institution as reputable as the Bank of Japan?
The other viewpoint, opposed to modesty—the view that I think is prescribed by normative epistemology (and also by more or less mainstream microeconomics)—requires a somewhat longer introduction.
ii.
By ancient tradition, every explanation of the Efficient Markets Hypothesis must open with the following joke:
Two economists are walking along the street, and one says, “Hey, someone dropped a $20 bill!” and the other says, “Well, it can’t be a real $20 bill because someone would have picked it up already.”
Also by ancient tradition, the next step of the explanation is to remark that while it may make sense to pick up a $20 bill you see on a relatively deserted street, if you think you have spotted a $20 bill lying on the floor of Grand Central Station (the heavily trafficked train terminal in New York City), and it has stayed there for several hours, then it probably is a fake $20 bill, or it has been glued to the ground.
In real life, when I asked a group of twenty relatively young people how many of them had ever found a $20 bill on the street, five raised their hands, and only one person had found a $20 bill on the street on two separate occasions. So the empirical truth about the joke is that while $20 bills on the street do exist, they’re rare.
On the other hand, the implied policy is that if you do find a $20 bill on the street, you should go ahead and pick it up, because that does happen. It’s not that rare. You certainly shouldn’t start agonizing over whether it’s too arrogant to believe that you have better eyesight than everyone else who has recently walked down the street.
On the other other hand, you should start agonizing about whether to trust your own mental processes if you think you’ve seen a $20 bill stay put for several hours on the floor of Grand Central Station. Especially if your explanation is that nobody else is eager for money.
Is there any other domain such that if we think we see an exploitable possibility, we should sooner doubt our own mental competence than trust the conclusion we reasoned our way to?
If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.” This is something into which human civilization puts an actual effort.
- Millions of dollars are offered to smart, conscientious people with physics PhDs to induce them to enter the field.
- These people are then offered huge additional payouts conditional on actual performance—especially outperformance relative to a baseline.3
- Large corporations form to specialize in narrow aspects of price-tuning.
- They have enormous computing clusters, vast historical datasets, and competent machine learning professionals.
- They receive repeated news of success or failure in a fast feedback loop.4
- The knowledge aggregation mechanism—namely, prices that equilibrate supply and demand for the financial asset—has proven to work beautifully, and acts to sum up the wisdom of all those highly motivated actors.
- An actor that spots a 1% systematic error in the aggregate estimate is rewarded with a billion dollars—in a process that also corrects the estimate.
- Barriers to entry are not zero (you can’t get the loans to make a billion-dollar corrective trade), but there are thousands of diverse intelligent actors who are all individually allowed to spot errors, correct them, and be rewarded, with no central veto.
This is certainly not perfect, but it is literally as good as it gets on modern-day Earth.
I don’t think I can beat the estimates produced by that process; I have nothing significant to contribute to them. Theoretically, a liquid market should be just exploitable enough to pay competent professionals the same hourly rate as their next-best opportunity, and with study and effort I might become one of those professionals and earn a standard hedge-fundie return. But that’s not the same as significantly improving on the market’s efficiency, and I’m not sure I expect a huge humanly accessible opportunity of that kind to exist, not in the thickly traded centers of the market. Somebody really would have taken it already! Our civilization cares about whether Microsoft stock will be priced at $37.70 or $37.75 tomorrow afternoon.
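To make the “just exploitable enough” equilibrium concrete, here is a minimal toy model. Everything in it is invented for illustration: the outside wage, the initial edge, and the 1/n² functional form are assumptions, not claims about real markets:

```python
# Toy model of a market that is "just exploitable enough": professionals
# keep entering while exploiting mispricings pays more per hour than
# their next-best job, and every entrant competes away part of the edge.

OUTSIDE_WAGE = 200.0        # $/hour assumed for a quant's next-best job
INITIAL_EDGE = 1_000_000.0  # $/hour of mispricing if only one pro existed

def edge_per_trader(n: int) -> float:
    # Assume total edge shrinks as 1/n as traders correct prices, and the
    # remainder is split n ways -> 1/n^2 per trader.
    return INITIAL_EDGE / (n * n)

n = 1
while edge_per_trader(n + 1) >= OUTSIDE_WAGE:
    n += 1  # another professional finds entry worthwhile

print(f"{n} professionals; marginal edge ${edge_per_trader(n):.0f}/hour")
# -> 70 professionals; marginal edge $204/hour -- roughly the outside
#    wage, so what remains pays a standard return and no more.
```

At equilibrium, the marginal professional earns about their next-best hourly rate, which is why becoming one of them is a job rather than a free lunch.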
I can’t predict a 5% move in Microsoft stock in the next two months, and neither can you. If your uncle tells an anecdote about how he tripled his investment in NetBet.com last year and he attributes this to his skill rather than luck, we know immediately and out of hand that he is wrong. Warren Buffett at the peak of his form couldn’t reliably triple his money every year. If there is a strategy so simple that your uncle can understand it, which has apparently made him money—then we guess that there were just hidden risks built into the strategy, and that in another year or with less favorable events he would have lost half as much as he gained. Any other possibility would be the equivalent of a $20 bill staying on the floor of Grand Central Station for ten years while a horde of physics PhDs searched for it using naked eyes, microscopes, and machine learning.
In the thickly traded parts of the stock market, where the collective power of human civilization is truly at its strongest, I doff my hat, I put aside my pride and kneel in true humility to accept the market’s beliefs as though they were my own, knowing that any impulse I feel to second-guess and every independent thought I have to argue otherwise is nothing but my own folly. If my perceptions suggest an exploitable opportunity, then my perceptions are far more likely to be mistaken than the market is. That is what it feels like to look upon a civilization doing something adequately.
The flip side of the efficient-markets perspective would have said this about the Bank of Japan:
Conventional Cynical Economist: So, Eliezer, you think you know better than the Bank of Japan and many other central banks around the world, do you?
Eliezer: Yep. Or rather, by reading econblogs, I believe myself to have identified which econbloggers know better, like Scott Sumner.
C.C.E.: Even though literally trillions of dollars of real value are at stake?
Eliezer: Yep.
C.C.E.: How do you make money off this special knowledge of yours?
Eliezer: I can’t. The market also collectively knows that the Bank of Japan is pursuing a bad monetary policy and has priced Japanese equities accordingly. So even though I know the Bank of Japan’s policy will make Japanese equities perform badly, that fact is already priced in; I can’t expect to make money by short-selling Japanese equities.
C.C.E.: I see. So exactly who is it, on this theory of yours, that is being stupid and passing up a predictable payout?
Eliezer: Nobody, of course! Only the Bank of Japan is allowed to control the trend line of the Japanese money supply, and the Bank of Japan’s governors are not paid any bonuses when the Japanese economy does better. They don’t get a million dollars in personal bonuses if the Japanese economy grows by a trillion dollars.
C.C.E.: So you can’t make any money off knowing better individually, and nobody who has the actual power and authority to fix the problem would gain a personal financial benefit from fixing it? Then we’re done! No anomalies here; this sounds like a perfectly normal state of affairs.
We don’t usually expect to find $20 bills lying on the street, because even though people sometimes drop $20 bills, someone else will usually have a chance to pick up that $20 bill before we do.
We don’t think we can predict 5% price changes in S&P 500 company stock prices over the next month, because we’re competing against dozens of hedge fund managers with enormous supercomputers and physics PhDs, any one of whom could make millions or billions on the pricing error—and in doing so, correct that error.
We can expect it to be hard to come up with a truly good startup idea, and for even the best ideas to involve sweat and risk, because lots of other people are trying to think up good startup ideas. Though in this case we do have the advantage that we can pick our own battles, seek out one good idea that we think hasn’t been done yet.
But the Bank of Japan is just one committee, and it’s not possible for anyone else to step up and make a billion dollars in the course of correcting their error. Even if you think you know exactly what the Bank of Japan is doing wrong, you can’t make a profit on that. At least some hedge-fund managers also know what the Bank of Japan is doing wrong, and the expected consequences are already priced into the market. Nor does this price movement fix the Bank of Japan’s mistaken behavior. So to the extent the Bank of Japan has poor incentives or some other systematic dysfunction, their mistake can persist. As a consequence, when I read some econbloggers (ones I’d previously seen make correct empirical predictions) saying that Japan was being grotesquely silly, and the economic logic seemed to me to check out as best I could follow it, I wasn’t particularly reluctant to believe them. Standard economic theory, generalized beyond the markets to other facets of society, did not seem to me to predict that the Bank of Japan must act wisely for the good of Japan. It would be no surprise if they were competent, but also not much of a surprise if they were incompetent. And knowing this didn’t help me either—I couldn’t exploit the knowledge to make an excess profit myself—and this too wasn’t a coincidence.
This kind of thinking can get quite a bit more complicated than the foregoing paragraphs might suggest. We have to ask why the government of Japan didn’t put pressure on the Bank of Japan (answer: they did, but the Bank of Japan refused), and many other questions. You would need to consider a much larger model of the world, and bring in a lot more background theory, to be confident that you understood the overall situation with the Bank of Japan.
But even without that detailed analysis, in the epistemological background we have a completely different picture from the modest one. We have a picture of the world where it is perfectly plausible for an econblogger to write up a good analysis of what the Bank of Japan is doing wrong, and for a sophisticated reader to reasonably agree that the analysis seems decisive, without a deep agonizing episode of Dunning-Kruger-inspired self-doubt playing any important role in the analysis.
iii.
When we critique a government, we don’t usually get to see what would actually happen if the government took our advice. But in this one case, less than a month after my exchange with John, the Bank of Japan—under the new leadership of Haruhiko Kuroda, and under unprecedented pressure from recently elected Prime Minister Shinzo Abe, who included monetary policy in his campaign platform—embarked on an attempt to print huge amounts of money, with a stated goal of doubling the Japanese money supply.5
Immediately after, Japan experienced real GDP growth of 2.3%, where the previous trend was for falling RGDP. Their economy was operating that far under capacity due to lack of money.6
Now, on the modest view, this was the unfairest test imaginable. Out of all the times that I’ve ever suggested that a government’s policy is suboptimal, the rare occasions when a government actually tries my preferred alternative will be selected for being the most mainstream, highest-conventional-prestige policies I happen to advocate, and those are the very policy proposals that modesty is least likely to disapprove of.
Indeed, if John had looked further into the issue, he would have found (as I found while writing this) that Nobel laureates had also criticized Japan’s monetary policy. He would have found that previous Japanese governments had also hinted to the Bank of Japan that they should print more money. The view from modesty looks at this state of affairs and says, “Hold up! You aren’t so specially blessed as your priors would have you believe; other academics already know what you know! Civilization isn’t so inadequate after all! This is how reasonable dissent from established institutions and experts operates in the real world: via opposition by other mainstream experts and institutions, not via the heroic effort of a lone economics blogger.”
However helpful or unhelpful such remarks may be for guarding against inflated pride, however, they don’t seem to refute (or even address) the central thesis of civilizational inadequacy, as I will define that term later. Roughly, the civilizational inadequacy thesis states that in situations where the central bank of a major developed democracy is carrying out a policy, and a number of highly regarded economists like Ben Bernanke have written papers about what that central bank is doing wrong, and there are widely accepted macroeconomic theories for understanding what that central bank is doing wrong, and the government of the country has tried to put pressure on the central bank to stop doing it wrong, and literally trillions of dollars in real wealth are at stake, then the overall competence of human civilization is such that we shouldn’t be surprised to find the professional economists at the Bank of Japan doing it wrong.
We shouldn’t even be surprised to find that a decision theorist without all that much background in economics can identify which econbloggers have correctly stated what the Bank of Japan is doing wrong, or which simple improvements to their current policies would improve the situation.
iv.
It doesn’t make much difference to my life whether I understand monetary policy better than, say, the European Central Bank, which as of late 2015 was repeating the same textbook mistake as the Bank of Japan and causing trillions of euros of damage to the European economy. Insofar as I have European friends in countries like Italy, it might be important to them to know that Europe’s economy is probably not going to get any better soon; or the knowledge might be relevant to predicting AI progress timelines, insofar as it tells us whether Japan ran out of low-hanging technological fruit or just had bad monetary policy. But that’s a rather distant relevance, and for most of my readers I would expect this issue to be even less relevant to their lives.
But you run into the same implicit background questions of inadequacy analysis when, for example, you’re making health care decisions. Cherry-picking another anecdote: My wife, Brienne, has a severe case of Seasonal Affective Disorder. As of 2014, she’d tried sitting in front of a little lightbox for an hour per day, and it hadn’t worked. SAD’s effects were crippling enough for it to be worth our time to consider extreme options, like her spending time in South America during the winter months. And indeed, vacationing in Chile and receiving more exposure to actual sunlight did work, where lightboxes failed.
From my perspective, the obvious next thought was: “Empirically, dinky little lightboxes don’t work. Empirically, the Sun does work. Next step: more light. Fill our house with more lumens than lightboxes provide.” In short order, I had strung up sixty-five 60W-equivalent LED bulbs in the living room, and another sixty-five in her bedroom.
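As a back-of-envelope check on that plan (the bulb counts are from the paragraph above; the lumen figures and prices are rough assumptions of mine, not measurements from the actual setup):

```python
# Back-of-envelope comparison of the DIY setup against a typical SAD
# lightbox. Assumed figures: ~800 lumens per 60W-equivalent LED bulb,
# ~$4.50 per bulb, and very roughly 2,500 lumens for a consumer lightbox.

BULBS_PER_ROOM = 65
ROOMS = 2
LUMENS_PER_BULB = 800    # assumed typical 60W-equivalent LED output
LIGHTBOX_LUMENS = 2_500  # rough assumed figure for a consumer lightbox
COST_PER_BULB = 4.50     # assumed price in USD

room_lumens = BULBS_PER_ROOM * LUMENS_PER_BULB
total_cost = ROOMS * BULBS_PER_ROOM * COST_PER_BULB
print(f"one room: {room_lumens:,} lumens, "
      f"~{room_lumens / LIGHTBOX_LUMENS:.0f}x a lightbox")
print(f"total cost: ${total_cost:,.0f}")
# -> one room: 52,000 lumens, ~21x a lightbox
# -> total cost: $585 (consistent with the ~$600 figure mentioned below)
```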
Ah, but should I assume that my civilization is being opportunistic about seeking out ways to cure SAD, and that if putting up 130 LED light bulbs often worked when lightboxes failed, doctors would already know about that? Should the fact that putting up 130 light bulbs isn’t a well-known next step after lightboxes convince me that my bright idea is probably not a good idea, because if it were, everyone would already be doing it? Should I conclude from my inability to find any published studies on the Internet testing this question that there is some fatal flaw in my plan that I’m just not seeing?
We might call this argument “Chesterton’s Absence of a Fence.” The thought being: I shouldn’t build a fence here, because if it were a good idea to have a fence here, someone would already have built it. The underlying question here is: How strongly should I expect that this extremely common medical problem has been thoroughly considered by my civilization, and that there’s nothing new, effective, and unconventional that I can personally improvise?
Eyeballing this question, my off-the-cuff answer—based mostly on the impressions related to me by every friend of mine who has ever dealt with medicine on a research level—is that I wouldn’t necessarily expect any medical researcher ever to have done a formal experiment on the first thought that popped into my mind for treating this extremely common depressive syndrome. Nor would I strongly expect the intervention, if initial tests found it to be effective, to have received enough attention that I could Google it.
But this is just my personal take on the adequacy of 21st-century medical research. Should I be nervous that this line of thinking is just an excuse? Should I fret about the apparently high estimate of my own competence implied by my thinking that I could find an obvious-seeming way to remedy SAD when trained doctors aren’t talking about it and I’m not a medical researcher? Am I going too far outside my own area of expertise and starting to think that I’m good at everything?
In practice, I didn’t bother going through an agonizing fit of self-doubt along those lines. The systematic competence of human civilization with respect to treating mood disorders wasn’t so apparent to me that I considered it a better use of resources to quietly drop the issue than to just lay down the ~$600 needed to test my suspicion. So I went ahead and ran the experiment. And as of early 2017, with two winters come and gone, Brienne seems to no longer have crippling SAD—though it took a lot of light bulbs, including light bulbs in her bedroom that had to be timed to go on at 7:30am before she woke up, to sustain the apparent cure.7
If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lesser and greater competence. My view is that this is best done from a framework of incentives and the equilibria of those incentives—which is to say, from the standpoint of microeconomics. This is the main topic I’ll cover here.
In the process, I will also make the case that modesty—the part of this process where you go into an agonizing fit of self-doubt—isn’t actually helpful for figuring out when you might outperform some aspect of the equilibrium.
But one should initially present a positive agenda in discussions like these—saying first what you think is the correct epistemology, before inveighing against a position you think is wrong.
So without further ado, in the next chapter I shall present a very simple framework for inadequate equilibria.
Cross-posted to Less Wrong. Next chapter: An Equilibrium of No Free Energy.
1. See Finney, “Philosophical Majoritarianism.” ↩
2. Note: They later said that I’d misunderstood their intent, so take this example with some grains of salt. ↩
3. This is why I specified relative prices: stock-trading professionals are usually graded on how well they do compared to the stock market, not compared to bonds. It’s much less obvious that bonds in general are priced reasonably relative to stocks in general, though this is still being debated by economists. ↩
4. This is why I specified near-term pricing of liquid assets. ↩
5. That is, the Bank of Japan purchased huge numbers of bonds with newly created electronic money. ↩
6. See “How Japan Proved Printing Money Can Be A Great Idea” for a more recent update.

   For readers who are wondering, “Wait, how the heck can printing money possibly lead to real goods and services being created?” I suggest Googling “sticky wages” and possibly consulting Scott Sumner’s history of the Great Depression, The Midas Paradox. ↩
7. Specifically, Brienne’s symptoms were mostly cured in the winter of 2015, and partially cured in the winter of 2016, when she spent most of her time under fewer lights. Brienne reports that she suffered a lot less even in the more recent winter, and experienced no suicidal ideation, unlike in years prior to the light therapy.

   I’ll be moderately surprised if this treatment works reliably, just because most things don’t where depression is concerned; but I would predict that it works often enough to be worth trying for other people experiencing severe treatment-resistant SAD. ↩
This is not really my area, but how strong is the evidence that Japan’s monetary policy has been terribly misguided?
Japan’s central bank has had to face the uncommon and challenging situation of a declining working-age population. This is sufficiently unusual that they can’t learn much about how best to handle it from the historical record.
During Japan’s “lost decade” (1992-2007), GDP per working-age adult grew at 1.4%/yr, which isn’t much slower than the 2.0% rate in the US over the same period (a period that ended, for the US, in a financial bubble arguably spurred by excessively low interest rates).
Japan’s unemployment rate has been the envy of developed countries, never rising above 5.6% even during the peak of their recessions: https://tradingeconomics.com/japan/unemployment-rate
We can add to this list of partial successes that while they’ve certainly undershot their inflation target, they haven’t had a repeat of their inflation breakout of the early 70s (https://tradingeconomics.com/japan/inflation-cpi). Losing control of inflation expectations is not a pleasant disease to treat. And their GDP per hour worked has roughly tracked the OECD average—again, half points for that: https://data.oecd.org/lprdty/gdp-per-hour-worked.htm
While they may well have done better than this with more experimental or heterodox monetary policy, central banks tend to be quite risk-averse and slow to change. Historically the downsides of messing up your monetary policy have been very large, so they are inclined to accept the ‘devil they know’ rather than shoot for the ideal policy. Whether being so risk-averse is the right approach just seems unclear to me.
Furthermore, if they changed their targets, methods or processes too frequently, their forward guidance could lose credibility.
I don’t mean to say that Japanese monetary policy was correct ex post or even ex ante, just that if this was chosen as a slam-dunk instance of ‘experts’ clearly messing up, it doesn’t seem so strong to me.
Excellent points.
To add to this, we should also bear in mind that GDP growth is a bad metric for comparing countries with low population growth with countries with high population growth.
For one western country with high population growth, I calculated that 20-25% of GDP is devoted just to catering for the population growth: new roads, hospitals, houses, offices, power stations, phone lines, factories, etc. Comparing raw GDP therefore overstates the goods available for consumption in the US, with its high population growth, relative to Japan, with hardly any. In effect the US has to produce a lot just to stand still.
As Japan has transitioned to low population growth, its effective ‘consumption-available’ GDP growth has been far higher than it looks.
I hate posting as I worry a lot about saying ill-considered or trivial things. But in the spirit of Eliezer’s post I will have a go.
This post reminds me of some of my experiences, and I really like the $20 note on the floor analogy.
I was a derivatives trader for over 20 years, most recently at a large hedge fund. In the early days I was managing new types of currency options at a relatively sleepy British investment bank focused on servicing clients. After a while I came to think that some of these options were underpriced by the market due to inadequacy of the models. I wanted to take proprietary positions by buying these options from other banks instead of selling them to clients, but management initially resisted along the lines of: why did I think all the other banks were wrong, especially when some of them were much larger and supposedly much more sophisticated? But after a year or so I did manage to buy these options, and made quite a lot of money, which helped set me on my career in trading.
What I noticed over the years is that these anomalies tend to happen and persist when:
- There is a new product (so there is less existing expertise to start with).
- The demand for the product is growing very quickly (so there is a rapid rise of less price-sensitive and less informed participants). This also generates complacency, as the product providers are making easy profits, and vested interests can build up in not disturbing the system.
- Extra potency may arise if the product is important enough to affect the market, or indeed the society it operates in, creating a feedback loop (what George Soros calls reflexivity). The development of credit derivatives and the subsequent bust could be a devastating example of this. And perhaps The Big Short is a good illustration of Eliezer’s points.
- Or you have an existing product, but the market in which it operates is changing rapidly, e.g. when OPEC failed to hold the oil price above $80 in 2014 in the face of a rapid decline in alternative energy costs.
I wonder if the above observations could be applied more generally along the lines of Eliezer’s ideas. Perhaps there are more opportunities to find a real $20 bill on the floor when the above conditions are present. Cryptocurrencies and blockchain, for example? And other areas undergoing rapid changes.
Could you say more about this point? I don’t think I understand it.
My best guess is that it means that when changes to the price of an asset result in changes out in the world, which in turn cause the asset price to change again in the same direction, then the asset price is likely to be wrong, and one can expect a correction. Is that it?
Thanks for the question and the opportunity to clarify (I think I may have inadvertently overemphasised the negative potentials in my post.)
Yes there is a feedback loop, but it doesn’t have to result in a correction.
I think cryptocurrencies and bitcoin could be a good example. You have a new product with a small group of users and uses initially. The user base grows, and due to the limited increase in supply by design, the price rises. As the total value of bitcoin in circulation rises, the liquidity, or the ability to execute larger transactions, also rises; the number of services accepting the currency rises; and there are more providers offering new ways to access the currency. All these generate more demand, which causes the price to rise even further, and so on. But what was just described is a feedback mechanism; that in itself does not suggest whether a correction should be due or not. Of course at some point a correction could be due if the feedback loop operates too far. I think that’s why Soros said in 2009, “When I see a bubble forming, I rush in to buy” (I think he meant a feedback loop when he said “bubble”).
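As a toy illustration of such a loop (with entirely made-up coefficients, not a model of actual bitcoin prices):

```python
# Toy reflexive feedback loop: a higher price improves liquidity and
# acceptance, which attracts more users, which raises the price again.
# Nothing in the loop itself says when, or whether, a correction is due.

price = 100.0   # arbitrary starting price
adoption = 1.0  # index of users/services accepting the asset

for month in range(1, 13):
    adoption *= 1 + 0.02 * (price / 100.0)  # higher price -> more adoption
    price *= 1 + 0.30 * (adoption - 1.0)    # more adoption -> more demand
    print(f"month {month:2d}: price {price:7.2f}, adoption {adoption:.2f}")
# Each variable feeds the other, so growth compounds: the loop is
# self-reinforcing rather than self-correcting.
```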
What I was speculating is whether there are more chances for anti-consensual views to turn out to be correct in a fast evolving system.
This couldn’t have come at a better time. Was just having a conversation with my roommate not two hours ago about buying the iPhone X the first day it comes out, then flying to Argentina (the most expensive country to buy iPhones in the world) and flipping it there.
My argument against it was: “There has to be some reason why this won’t work, if it was that lucrative, everyone would be doing it already.”
But he reminded me that:
- Not everyone speaks fluent enough Spanish, like I do, to navigate the trenches of the Argentinian black market.
- Not everyone cares enough, or has enough free time, to wake up at 3 AM and go wait in line at an Apple Store to buy the newest piece of technology.
The reason people aren’t doing this is probably that it isn’t profitable once you account for import duties, value added tax and customs clearance fees, as well as the time costs of transacting in the black market. I’m from Argentina and have investigated this in the past for other electronics, so my default assumption is that these reasons generalize to this particular case.
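A back-of-envelope version of that reasoning, with every figure invented for illustration (these are not actual Argentine prices, duty rates, or fees):

```python
# Back-of-envelope arbitrage check: the headline price gap shrinks fast
# once import duty, VAT, fees, and time costs are subtracted. All
# figures below are assumptions made up for the example.

US_PRICE = 1_000.0         # assumed US retail price of the phone
ARGENTINA_PRICE = 1_900.0  # assumed resale price in Argentina
IMPORT_DUTY = 0.35         # assumed ad valorem duty
VAT = 0.21                 # assumed value-added tax rate
FIXED_COSTS = 150.0        # assumed customs fees, travel, queueing time

landed_cost = US_PRICE * (1 + IMPORT_DUTY) * (1 + VAT) + FIXED_COSTS
profit = ARGENTINA_PRICE - landed_cost
print(f"landed cost ${landed_cost:,.0f}, profit ${profit:.0f} per phone")
# -> landed cost $1,784, profit $116 per phone: a thin margin for the
#    hassle and black-market risk described above.
```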
I think this discussion provides a good illustration of the following principle: you should usually be skeptical of your ability to “beat the market” even if you are able to come up with a plausible explanation of the phenomenon in question from which it follows that your circumstances are unique.
Similarly, I think one should generally distrust one’s ability to “beat elite common sense” even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance.
Very rarely, you may be able to do better than the market or the experts, but knowing that this is one of those cases takes much more than saying “I have a story that implies I can do this, and this story looks plausible to me.”
Note that in Eliezer’s example above, he isn’t claiming to have any diagnosis at all of what led the Bank of Japan to reach the wrong conclusion. The premise isn’t “I have good reason to think the Bank of Japan is biased/mistaken in this particular way in this case,” but rather: “It’s unsurprising for institutions like the Bank of Japan to be wrong in easy-to-demonstrate ways, so it doesn’t take a ton of object-level evidence for me to reach a confident conclusion that they’re wrong on the object level, even if I have no idea what particular mistake they’re making, what their reasons are, etc. The Bank of Japan just isn’t the kind of institution that we should strongly expect to be right or wrong on this kind of issue (even though this issue is basic to its institutional function); so moderate amounts of ordinary object-level evidence can be dispositive all on its own.”
If that is the view, I am unsure what the Bank of Japan example is meant to motivate.
The example is confounded by the fact that Eliezer reports a lot of outside-view information in making the determination that the BoJ is making a bad call. The judgement (and object-level argument) he endorses originally came from econbloggers (I gather profs like Sumner) whom Eliezer trusts due to their good track record. In addition he reports that the argument the econbloggers make does make object-level sense.
Yet modest approaches can get the same answer without conceding that the object-level evidence is dispositive. If the Bank of Japan is debunked as an authority (for whatever reason), then in a dispute of ‘them versus economists with a good empirical track record’, the outside view favours the latter’s determination for standard reasons (it might caution that one should look more widely across economic expertise, but bracket this). It also plausibly allows one to assert confidence in the particular argument used to make the determination that the BoJ is making a bad call.
So I think I’d have made a similar judgement to Eliezer in this case whether or not I had any ‘object level’ evidence to go on: if I didn’t know (or couldn’t understand) the argument Sumner et al. used, I’d still conclude they’re likely right.
It seems one needs to look for cases where ‘outside’ and ‘inside’ diverge. So maybe something like, “Eliezer judged from his personal knowledge of economics the BoJ was making a bad call (without inspiration from any plausible epistemic authority), and was right to back himself ‘over’ the BoJ.”
That would be a case where a modest person would disagree that this is the right approach. If all I had to go on was my own argument and knowledge of the BoJ’s policy (e.g., I couldn’t consult economists or econbloggers or whatever), then I suggest one should think that the incentives of the BoJ are probably at least somewhat better than orthogonal to the truth in expectation, and probably better correlated with it than an argument made by an amateur economist. If it transpired the argument was actually right, modesty’s failure in a single case is not much of a strike against it, at least without some track record beyond this single case.
I never claimed that this is what Eliezer was doing in that particular case, or in other cases. (I’m not even sure I understand Eliezer’s position.) I was responding to the previous comment, and drawing a parallel between “beating the market” in that and other contexts. I’m sorry if this was unclear.
To address your substantive point: If the claim is that we shouldn’t give much weight to the views of individuals and institutions that we shouldn’t expect to be good at tracking the truth, despite their status or prominence in society, this is something that hardly any rationalist or EA would dispute. Nor does this vindicate various confident pronouncements Eliezer has made in the past—about nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics, to name a few—that deviate significantly from expert opinion, unless this is conjoined with credible arguments for thinking that warranted skepticism extends to each of those expert communities. To my knowledge, no persuasive arguments of this sort have been provided.
Yeah, I wasn’t saying that you were making a claim about Eliezer; I just wanted to highlight that he’s possibly making a stronger claim even than the one you’re warning against when you say “one should generally distrust one’s ability to ‘beat elite common sense’ even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance”.
I think the main two factual disagreements here might be “how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?” and “for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities’) competency, epistemic rationality, meta-rationality, etc.?” I don’t know whether you in particular would disagree with Eliezer on those claims, though it sounds like you may.
Yeah, agreed. The “adequacy” level of those fields, and the base adequacy level of civilization as a whole, is one of the most important questions here.
Could you say more about what you have in mind by “confident pronouncements [about] AI timelines”? I usually think of Eliezer as very non-confident about timelines.
Thank you, this is extremely clear, and captures the essence of much of what’s going between Eliezer and his critics in this area.
I had in mind forecasts Eliezer made many years ago that didn’t come to pass as well as his most recent bet with Bryan Caplan. But it’s a stretch to call these ‘confident pronouncements’, so I’ve edited my post and removed ‘AI timelines’ from the list of examples.
Going back to your list:
I haven’t looked much at the nutrition or population ethics discussions, though I understand Eliezer mistakenly endorsed Gary Taubes’ theories in the past. If anyone has links, I’d be interested to read more.
AFAIK Eliezer hasn’t published why he holds his views about animal consciousness, and I don’t know what he’s thinking there. I don’t have a strong view on whether he’s right (or whether he’s overconfident).
Concerning zombies: I think Eliezer is correct that the zombie argument can’t provide any evidence for the claim that we instantiate mental properties that don’t logically supervene on the physical world. Updating on factual evidence is a special case of a causal relationship, and if instantiating some property P is causally impacting our physical brain states and behaviors, then P supervenes on the physical.
I’m happy to talk more about this, and I think questions like this are really relevant to evaluating the track record of anti-modesty positions, so this seems like as good a place as any for discussion. I’m also happy to talk more about meta questions related to this issue, like, “If the argument above is correct, why hasn’t it convinced all philosophers of mind?” I don’t have super confident views on that question, but there are various obvious possibilities that come to mind.
Concerning QM: I think Eliezer’s correct that Copenhagen-associated views like “objective collapse” and “quantum non-realism” are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham’s razor. I’m happy to talk more about this too; I think the object-level discussions are important here.
A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I’m here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn’t itself take a position on the issue by relying primarily on the inside view.
Let’s consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to “read the QM sequence”. But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.
To persuade me, Greg, and other skeptics, what Eliezer needs to do is to persuade the physicists. Short of that, he can persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists change their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?
Update (2017-10-28): I just realized that the kind of challenge I’m raising here has been carried out, in the form of a “natural experiment”, for Eliezer’s views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer’s TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).
Update (2018-01-20): Note the parallels between what Scott Alexander says here and what I write above (emphasis added):
This seems correct. I just noticed you could phrase this the other way—why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than building inside views of each field (which is why I didn’t find Greg’s post particularly persuasive—this isn’t a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).
Maybe because these people have been surprisingly accurate? In addition, it’s not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.
Yeah, that’s the sort of discussion that seems to me most relevant.
I don’t think the modest view (at least as presented by Gregory) would believe in any of the particular interpretations as there is significant debate still.
The informed modest person would go, “You have object-level reasons to dislike these interpretations. Other people have object-level reasons to dislike your interpretations. Call me when you have hashed it out or done an experiment to pick a side.” They would go on and do QM without worrying too much about what it all means.
Yeah, I’m not making claims about what modest positions think about this issue. I’m also not endorsing a particular solution to the question of where the Born rule comes from (and Eliezer hasn’t endorsed any solution either, to my knowledge). I’m making two claims:
1. QM non-realism and objective collapse aren’t true.
2. As a performative corollary, arguments about QM non-realism and objective collapse are tractable, even for non-specialists; it’s possible for non-specialists to reach fairly confident conclusions about those particular propositions.
I don’t think either of those claims should be immediately obvious to non-specialists who completely reject “try to ignore object-level arguments”-style modesty, but who haven’t looked much into this question. Non-modest people should initially assign at least moderate probability to both 1 and 2 being false, though I’m claiming it doesn’t take an inordinate amount of investigation or background knowledge to determine that they’re true.
(Edit re Will’s question below: In the QM sequence, what Eliezer means by “many worlds” is only that the wave-function formalism corresponds to something real in the external world, and that this wave function evolves over time to yield many different macroscopic states like our “classical” world. I’ve heard this family of views called “(QM) multiverse” views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.)
Huh, he seemed fairly confident about endorsing MWI in his sequence here.
He endorses “many worlds” in the sense that he thinks the wave-function formalism corresponds to something real and mind-independent, and that this wave function evolves over time to yield many different macroscopic states like our “classical” world. I’ve heard this family of views called “(QM) multiverse” views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.
From a 2008 post in the MWI sequence:
Ah, it has been a while since I engaged with this stuff. That makes sense. I think we are talking past each other a bit, though. I’ve adopted a moderately modest approach to QM, since I’ve not touched it in a while and I expect the debate has moved on.
We started from a criticism of a particular position (the copenhagen interpretation) which I think is a fair thing to do for the modest and immodest. The modest person might misunderstand a position and be able to update themselves better if they criticize it and get a better explanation.
The question is what happens when you criticize it and don’t get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions or trust the specialists to figure it out?
I’m curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, and seeming to correspond to the QIT interpretation on the wiki page of interpretations, which fits with probabilities being in the mind).
I don’t think we should describe all instances of deference to any authority, all uses of the outside view, etc. as “modesty”. (I don’t know whether you’re doing that here; I just want to be clear that this at least isn’t what the “modesty” debate has traditionally been about.)
I don’t think there’s any general answer to this. The right answer depends on the strength of the object-level arguments; on how much reason you have to think you’ve understood and gleaned the right take-aways from those arguments; on your model of the physics community and other relevant communities; on the expected information value of looking into the issue more; on how costly it is to seek different kinds of further evidence; etc.
In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we’d naively think it is, etc.), then I’d be interested to hear more details. If the idea is that there are ways to talk about the experimental data without committing ourselves to a claim about why the Born rule holds, then I agree with that, though it obviously doesn’t answer the question of why the Born rule holds. If the idea is that there are no facts of the matter outside of observers’ data, then I feel comfortable dismissing that view even if a non-negligible number of physicists turn out to endorse it.
I also feel comfortable having lower probability in the existence of God than the average physicist does; and “physicists are the wrong kind of authority to defer to about God” isn’t the reasoning I go through to reach that conclusion.
Out of curiosity, what is the reasoning you would go through to reach that conclusion?
Heh, I’m in danger of getting nerd-sniped into physics land, which would be a multiyear journey. I found myself trying to figure out whether the stories in this paper count as real macroscopic worlds or not (or hidden variables). And then I tried to figure out whether it matters or not.
I’m going to bow out here. I mainly wanted to point out that there are more possibilities than just “believe in Copenhagen” and “believe in Everett.”
Cool. Note the bet with Bryan Caplan was partly tongue-in-cheek; though it’s true Eliezer is currently relatively pessimistic about humanity’s chances.
From Eliezer on Facebook:
There’s something else you haven’t considered: people ARE already doing it. I live in Beijing, and here you can easily buy grey-market Hong Kong iPhones for significantly cheaper than mainland versions. Competition has driven prices down, and the part of the value captured by the smugglers is far lower than the difference between the official prices in the mainland and HK. I’d be very surprised if the same weren’t true in Argentina.
I agree that financial incentives and disincentives result in failures (i.e., social problems) of all kinds. One of the biggest reasons, as I’m sure you mention at some point in your book, is corruption; e.g., the beef/dairy industry pays off environmental NGOs and government to stay quiet about their environmental impact.
But don’t you think that non-financial rewards and punishments also play a large role in impeding social progress, in particular social rewards and punishments? E.g., people don’t wear enough to stay warm in the winter because others will tease them for being uncool; people bully others because they are then respected more; etc.
Non-financial incentives clearly play a major role both in dysfunctional systems and in well-functioning ones. A lot of those incentives are harder to observe and quantify, though; and I’d expect them to vary more interpersonally, and to be harder to intervene on in cases like the Bank of Japan.
It isn’t so surprising if (say) key decisionmakers at the Bank of Japan cared more about winning the esteem of particular friends and colleagues at dinner parties than about the social pressure from other people to change course; or if they cared more about their commitment to a certain ideology or self-image; or any number of other small day-to-day factors. Whereas it would be genuinely surprising if those commonplace small factors were able to outweigh a large financial incentive.