Do you mean Aristotle’s “Politics”?
Yes, I did. Whoops, fixed.
In general, yes, international relations is a complex adaptive system, and that could be relevant. But I’m just not sure how far the tools of complexity theory can get you in this domain. I would agree that complexity science approaches seem closely related to game-theoretic rational actor models, where slight changes can lead to wildly different results, and which are unstable in the chaos theory / complexity sense. I discuss that issue briefly in the next post, now online, but as far as I am aware, complexity theory is not a focus anywhere in international relations or political science. (If you have links that discuss it, I’d love to see them.)
Thanks! (Now fixed)
Good writeup, and cool tool. I may use it and/or point to it in the future.
I agree that when everything is already quantified, you can do this. The chapter in HtMA is also fantastic. But it’s fairly rare that people have already quantified all of the relevant variables and properly explored what the available decisions are or what they would affect—and skipping those steps can materially change the VoI, and they are far more important to do anyway.
That said, no, basic VoI isn’t hard. It’s just that the use case is fairly narrow, while the conceptual approach is incredibly useful in the remaining cases, even those where actually quantifying everything or doing the math is incredibly complex or even infeasible.
I definitely see a wide variety of techniques used in applied public policy, as I said in the next paragraph. The work I did at RAND was very interdisciplinary, and drew on a wide variety of academic disciplines—but it was also decision support and applied policy analysis, not academic public policy.
And I was probably not generous enough about what types of methods are used in academic public policy—but my view is colored by the fact that the scope in many academic departments seems almost shockingly narrow compared to what I was used to, or even what seems reasonable. The academic side, meaning people I see going for tenure in public policy departments, seems to focus pretty narrowly on econometric methods for estimating the impact of interventions. They also do ex-post cost-benefit analyses, but those use econometric estimates of impact to estimate the benefits. And when academic ex-ante analysis is done, it’s usually part of a study using econometric or RCT estimates to project the impact.
Good to see more people thinking about this, but the vocabulary you say is needed already exists—look for things talking about “Global Catastrophic Risks” or “GCRs”.
A few other notes:
It would help if you embedded the images. (You just need to copy the image address from imgur.)
″ with a significant role played by their . ” ← ?
″ the ability for the future of our civilisation to deviate sufficiently from our set of values as to render this version of humanity meaningless from today’s perspective, similar to the ship of Theseus problem. ” ← I don’t think that’s a useful comparison.
I’m not sure exactly who was running things, but I assumed the work is related to / continued by FRI, given the overlap in people involved.
Seriously—start with the 5 pages I recommended, and that should give you enough information (VoI FTW!) to decide if you want to read Chapters 1 & 2 as well.
(But Chapters 3 and 4 really *are* irrelevant unless you happen to be designing a biosurveillance system or a terrorism threat early warning detection system that uses classified information.)
This is an area I should probably write more about, but I have a harder time being pithy, and haven’t tried to distill my thoughts enough. But since you asked....
As a first approximation, you want to first consider the plausible value of the decision. If it’s choosing a career, for example, the difference between a good choice and a bad one is plausibly a couple million dollars. You almost certainly don’t want to spend more than a small fraction of that gathering information, but you do want to spend up to, say, 5% on thinking about the decision. (Yes, I’d say spending a year or two exploring the options before picking a career is worthwhile, if you’re really uncertain—but you shouldn’t need to be. See below.)
Once you have some idea of what the options are, you should identify what about the different options is good or bad—or uncertain. This should form the basis of at least a pro/con list—which is often enough by itself. (See my simulation here.) If you see that one option is winning on that type of list, you should probably just pick it—unless there are uncertainties that would change your mind.
Next, list those key uncertainties. In the career example, these might include: Will I enjoy doing the work? How likely am I to be successful in the area? How likely is the field to remain viable in the coming decades? And how easy or hard would it be to transition into or out of it?
Notice that some of the uncertainties matter significantly, and others don’t. We have a tool that’s useful for this, which is the theoretical maximum of VoI, called the Value of Perfect Information. This is the difference between the expected value of deciding after learning the answer with certainty and the expected value of your current best decision. (Note: not knowing the future with certainty, but rather knowing the correct answer to the uncertainty. For example, knowing that you would have a 70% chance of being successful and making tons of money in finance.) Now ask yourself: If I knew the answer, would it change my decision? If the answer is no, drop it from the list of key uncertainties. If a relatively small probability of success would still leave finance as your top option, because of career capital and the potentially huge payoff, maybe this doesn’t matter. Alternatively, if even a 95% chance of success wouldn’t matter because you don’t know if you’d enjoy it, it still doesn’t matter—so move on to other questions.
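To make that concrete, here is a minimal sketch of the calculation in Python. The two options, the 50% prior, and all of the payoffs are numbers I made up purely for illustration:

```python
# A minimal sketch of the Value of Perfect Information for the career example.
# All probabilities and payoffs here are made up for illustration.

p_thrive = 0.5  # prior probability that you'd thrive in finance

# Illustrative lifetime payoffs (in dollars) for each option under each outcome.
payoffs = {
    "finance": {"thrive": 3_000_000, "struggle": 500_000},
    "default": {"thrive": 1_500_000, "struggle": 1_500_000},  # outcome-independent
}

def expected_value(option, p):
    """Expected value of an option, given P(thrive) = p."""
    return p * payoffs[option]["thrive"] + (1 - p) * payoffs[option]["struggle"]

# Best decision now, without any new information:
ev_now = max(expected_value(opt, p_thrive) for opt in payoffs)

# With perfect information, you learn the outcome first, then pick the best option:
ev_perfect = (
    p_thrive * max(payoffs[opt]["thrive"] for opt in payoffs)
    + (1 - p_thrive) * max(payoffs[opt]["struggle"] for opt in payoffs)
)

print(f"EV now: ${ev_now:,.0f}")  # $1,750,000 (finance is the best bet today)
print(f"Value of Perfect Information: ${ev_perfect - ev_now:,.0f}")  # $500,000
```

With these numbers, no amount of information gathering about this one uncertainty can be worth more than $500,000; that is the sense in which the Value of Perfect Information is a ceiling.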
If knowing the answer would change your mind, you need to ask what information you could plausibly get about the question, and how expensive it would be. For instance, you currently think there’s a 50% chance you’d enjoy working in finance. Spending a summer interning would make you sure one way or the other—but the cost in time is very high. It might be worth it, but there are other possibilities. Spending 15 minutes talking to someone in the field won’t make you certain, but will likely change your mind to think the odds are 90% or 10%. In the former case, you can still decide to apply for a summer internship, and in the latter case, you can drop the idea now.
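Continuing the sketch above (and again, every number here is invented), the value of that imperfect 15-minute conversation can be computed the same way:

```python
# A self-contained sketch of the value of *imperfect* information: the
# 15-minute conversation. Same made-up payoffs as in the sketch above.

payoffs = {
    "finance": {"thrive": 3_000_000, "struggle": 500_000},
    "default": {"thrive": 1_500_000, "struggle": 1_500_000},
}

def best_ev(p_thrive):
    """Expected value of the best option, given P(you'd thrive in finance)."""
    return max(
        p_thrive * v["thrive"] + (1 - p_thrive) * v["struggle"]
        for v in payoffs.values()
    )

# Assume the conversation moves a 50% prior to either 90% or 10%, each with
# probability 0.5 (so the posteriors average back to the prior, as they must).
ev_after_talk = 0.5 * best_ev(0.9) + 0.5 * best_ev(0.1)
voi_talk = ev_after_talk - best_ev(0.5)

print(f"VoI of the conversation: ${voi_talk:,.0f}")  # $375,000
# Well under the $500,000 ceiling from perfect information, but enormous
# relative to the 15-minute cost, so the conversation is clearly worth having.
```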
You should continue with this process of finding key things that would change your mind until you either think that you’re unlikely to change your mind further, or the cost of more certainty is high enough compared to the value of the decision that it’s not obviously worth the investment of time and money. (If it’s marginal or unclear, unless the decision is worth tens or hundreds of millions of dollars, significant further analysis is costly enough that you may not want to do it. If you’re unsure which way to decide at that point, you should flip a coin about the eventual decision—and if you’re uncertain enough to use a coin flip, then just do the riskier thing.)
Yes, that was partially the conclusion of my dissertation—and see my response to the above comment.
From what I understand, geoengineering is mostly avoided because people claim (incorrectly, in my view) that pursuing it signals a country thinks there is no chance of fixing the problem by limiting emissions. In addition, people worry that it has lots of complex impacts we don’t understand. As we understand the impacts better, it becomes more viable—and more worrisome. And as it becomes clearer over the next 20–30 years that a lot of the impacts are severe, it becomes more likely to be tried.
I’ve heard “action relevant” used more often—but both are used.
Another potentially useful heuristic is to pick a research question where the answer is useful whether or not you find what you’d expect. For example, “Are house fires more frequent in households with one or more smokers?” is very decision relevant if the answer is “Far more likely,” but not useful if the answer is “No,” or “A very little bit.” (But if a question is only relevant if you get an unlikely answer, it’s even less useful. For example, “How scared are Londoners of house fires?” is plausibly very decision relevant if the answer turns out to be “Not at all, and they take no safety measures”—but that’s very unlikely to be the answer.)
A better question might be “Which of the following behaviors or characteristics correlates with increased fire risk: presence of school-aged children, smoking, building age, or income?” Notice that this is more complex than the previous question, but if you’re gathering information about smoking, the other factors are relatively easy to find information about as well—and they make the project much more likely to find something useful.
(The decision-theoretic optimum is to pick questions by their expected decision-relevance, weighting the usefulness of each possible answer by the likelihood of finding it; a toy version of this calculation is sketched below. But even if a question is very valuable in expectation, from a career perspective you don’t want to spend time on questions that have a good chance of being a waste of time, even if they have a small chance of being really useful. This is a trade-off that requires reflection, because it leads people to take fewer risks, and from a social benefit perspective, at least, most people already take too few risks.)
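To spell that out, here is the toy expected-usefulness calculation; every probability and usefulness score below is invented for illustration:

```python
# A toy sketch of weighting a question's decision-relevance by how likely
# each answer is. All probabilities and usefulness scores are invented.

def expected_usefulness(answers):
    """answers: list of (probability of this answer, usefulness if you get it)."""
    return sum(p * usefulness for p, usefulness in answers)

# "Are house fires more frequent in households with smokers?"
smoking_question = [
    (0.6, 10),  # "far more likely"   -> very decision-relevant
    (0.4, 1),   # "no / a little bit" -> not very useful
]

# "How scared are Londoners of house fires?"
fear_question = [
    (0.05, 10),  # "not at all, no precautions" -> very relevant, but unlikely
    (0.95, 1),   # "somewhat scared"            -> tells you almost nothing
]

print(expected_usefulness(smoking_question))  # 6.4
print(expected_usefulness(fear_question))     # 1.45
```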
(Great idea. But I think this would work better if you had the top comment be just “Here for easy disagreement:” then had the sub comments be the ranges, so that the top comment could be upvoted for visibility.)
Edit: In case this isn’t clear, the parent comment was changed. Much better!
The other fairly plausible GCR that is discussed is biological. The Black Death likely killed 20% of the world’s population (excluding the Americas, but not China or Africa, which were affected) in the Middle Ages. Many think that bioengineered pathogens or other threats could plausibly have similar effects now. Supervolcanoes and asteroids are also on the list of potential GCRs, but we have better ideas about their frequency / probability.
Of course, Toby’s book will discuss all of this—and it’s coming out soon!
I agree overall. The best case I’ve heard for Climate Change as an indirect GCR, which seems unlikely but not at all implausible, is not about direct food shortages, but rather the following scenario:
Assume states use geoengineering to provide cloud cover, reduce heat locally, or create rain. Once this is started, they will quickly depend on it as a way to mitigate climate change, and their populations will near-universally demand that it continue. Given the complexity and global nature of weather, however, this is almost certain to create non-trivial effects on other countries. If it starts causing crop failures or deadly heat waves in those countries, they would feel justified in escalating this to war, regardless of who was involved—such conflicts could easily involve many parties. In such a case, in a war between nuclear powers, there is little reason to think they would be willing to stop at non-nuclear options.
You’d need to think there was a very significant failure of markets to assume that food supplies wouldn’t be adapted quickly enough to minimize this impact. That’s not impossible, but you don’t need central management to get people to adapt—this isn’t a sudden change that we need to prepare for, it’s a gradual shift. That’s not to say there aren’t smart things that could significantly help, but there are plenty of people thinking about this, so I don’t see it as neglected or likely to be high-impact.