I am the co-founder of, and a researcher at, the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last decade I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
JustinShovelain
How confident are you that it affects mainly older people or those with preexisting health conditions? Are the stats solid now? I vaguely recall that SARS and MERS (possibly the relevant reference class) were age-agnostic.
By total mortality rate do you mean the total number of people who eventually die, or do you mean a percentage?
If the former I agree.
If you mean the latter… I see it as a toss-up between the selection effect of the more severely affected being the ones we know have it (and so decreasing the true mortality rate relative to the published numbers) and the time the disease takes to fully progress (and so increasing the true mortality rate relative to the published numbers).
Thanks for the article. One thing I’m wondering about that has implications for the large-scale pandemic case is how much equipment society has for “mechanical ventilation and sometimes ECMO (pumping blood through an artificial lung for oxygenation)”, and what the consequences would be of not having access to such equipment. Would such people die? In that case the fatality rate would grow massively, to something like 25 to 32%.
Whether there is enough equipment would depend on how many people get sick at once, whether more than one person can use the same equipment in an interleaved fashion, how long each sick person needs the equipment, whether there are good alternatives to the equipment, and how quickly additional equipment could be built or improvised.
So the case I’d be worried about here is a very quick spread in which you need rare, expensive equipment to keep the fatality rate down to where it is currently.
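To make the equipment worry concrete, here is a minimal capacity sketch. Every number in it (population, attack rate, share needing ventilation, days on a machine, outbreak length, machines available) is an illustrative assumption of mine, not a figure from the article; the point is only the shape of the calculation.

```python
# Minimal equipment-capacity sketch; all numbers are illustrative assumptions.

population = 10_000_000           # region size (assumption)
attack_rate = 0.3                 # fraction eventually infected (assumption)
share_needing_equipment = 0.05    # fraction of cases needing ventilation/ECMO (assumption)
days_on_equipment = 10            # days each such patient occupies a machine (assumption)
outbreak_duration_days = 120      # how spread out the epidemic is (assumption)
machines_available = 2_000        # ventilators/ECMO units on hand (assumption)

cases_needing_equipment = population * attack_rate * share_needing_equipment
# Average machines occupied at once, assuming cases are spread evenly over the
# outbreak; a fast spike would concentrate demand and look much worse.
avg_machines_in_use = cases_needing_equipment * days_on_equipment / outbreak_duration_days

print(f"Cases needing equipment: {cases_needing_equipment:,.0f}")
print(f"Average machines needed at once: {avg_machines_in_use:,.0f}")
print(f"Shortfall: {max(0, avg_machines_in_use - machines_available):,.0f}")
```

With these made-up numbers the shortfall is large, which is exactly the quick-spread case I’m worried about.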
Safety regulators: A tool for mitigating technological risk
It’s true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I’d expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It’s lacking the examples and explicit connections, though, that would make this salient. In a future post I’ve got queued on AI safety strategy I already link to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I’ll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.
Yes, the model in itself doesn’t say that we’ll tend towards competitiveness. That comes from the definition of competitiveness I’m using here, which is similar to Robin Hanson’s suggestion. “Competitiveness” as used here just refers to the statistical tendency of systems to evolve in certain ways; it’s similar to the statement that entropy tends to increase. Some of those ways are aligned with our values and others are not. In making the axes orthogonal I was using the (probably true) assumption that most ways a system can evolve are not aligned with our values.
(With the reply I was trying to point in the direction of this increasing-entropy-like definition.)
The ‘sprinting between oases’ strategy
The sense in which we’d expect it to maximize competitiveness is this: what spreads spreads, what lives lives, what is able to grow grows, what is stable is stable… and not all of this is aligned with humanity’s ultimate values; the methods that sometimes maximize competitiveness (like not internalizing external costs, wiping out competitors, all work and no play) much of the time don’t maximize achieving our values. What is competitive in this sense is, however, dependent on the circumstances, and hopefully we can align it better. I hope this clarifies things.
I agree with your thoughts. Competitiveness isn’t necessarily fully orthogonal to common good pressures but there generally is a large component that is, especially in tough cases.
If they are not orthogonal then they may reach some sort of equilibrium that does maximize competitiveness without decreasing the common good to zero. However, in a higher dimensional version of this it becomes more likely that they are mostly orthogonal (a priori, more things are orthogonal in higher dimensional spaces), and if what is competitive can sort of change with time by walking through dimensions (for instance moving in dimension 4, then 7, then 2, then...) and iteratively shifting (this is hard to express and still a bit vague in my mind), then competitiveness and the common good may become more orthogonal with time.
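Since the a priori orthogonality point is doing real work here, a small numerical illustration may help: two random directions in a high-dimensional space are very nearly orthogonal, and more so as the dimension grows. This is a standard property of random vectors, not anything specific to the Moloch model.

```python
# Illustration: random directions become nearly orthogonal as dimension grows.
import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    a = rng.normal(size=(5000, dim))
    b = rng.normal(size=(5000, dim))
    a /= np.linalg.norm(a, axis=1, keepdims=True)  # normalize to unit vectors
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    cos = np.abs(np.sum(a * b, axis=1))            # |cosine| of the angle between pairs
    print(f"dim={dim:5d}  mean |cos(angle)| = {cos.mean():.3f}")
# The mean |cosine| shrinks toward 0 (i.e. near-orthogonal) as dimension increases.
```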
The Moloch and Pareto optimal frontier idea is probably extendable to deal with frontier movement, dealing with non-orthogonal dimensions, deconfounding dimensions, expanding or restricting dimensionality, and allowing transformations to iteratively “leak” into additional dimensions and change the degree of “orthogonality.”
Nice! I would argue, though, that because we generally do not consider all dimensions at once, and because not all game theory situations (“games”) lend themselves to this dimensional expansion, we may, for all practical purposes, sometimes find ourselves in this situation.
Overall though, the idea of expanding the dimensionality does point towards one way to remove this dynamic.
Moloch and the Pareto optimal frontier
My argument is about the latter; the variances decrease in size from I to T to C. The unit analysis still works because the other parts are still implicitly there but are treated as constants when dropped from the framework.
Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I’d like to see more articles like this.
One thing I focus on when trying to make ITC more practical is ways to reduce its complexity even further. I do this by looking for which factors intuitively seem to have wider ranges in practice. Importance can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to positive digits. The Crowdedness adjustment factor (capturing diminishing or increasing marginal returns) generally varies only by factors of thousands, from negative tens to positive thousands.
In summary the ranges are intuitively roughly:
Importance (util/%progress): (-10^9, 10^9)
Tractability (%progress/$): (-10^-6, 1)
Crowdedness adjustment factor ($/$in): (-10, 10^3)
Let’s assume each intervention comes with values for these factors drawn from probability distributions over these ranges. Roughly speaking, then, we should care about these factors based on the degree to which they help us clearly see which intervention is better than another.
The extent to which these let us distinguish between the values of interventions depends on our uncertainty in each factor for each intervention and on how the value depends on each factor. Because the value is equal to Importance*Tractability*CrowdednessAdjustmentFactor, each factor is treated the same (there is an abstract symmetry). Thus we only need to consider how big each factor’s range is in terms of our typical per-intervention uncertainty in that factor. This then tells us how useful each factor is at distinguishing interventions by value.
Pulling numbers out of the intuitive hat for the typical intervention uncertainty, I get:
Importance (util/%progress uncertainty unit): 10
Tractability (%progress/$ uncertainty unit): 10^-6
Crowdedness adjustment factor ($/$in uncertainty unit): 1
Dividing the ranges into these units lets us measure the distinguishing power of each factor:
Importance normalized range (distinguishing units): 10^8
Tractability normalized range (distinguishing units): 10^6
Crowdedness adjustment factor normalized range (distinguishing units): 10^3
As a rule of thumb, then, it looks like focusing on Importance is better than focusing on Tractability, which is better than focusing on Crowdedness. This lends itself to a sequence of improving heuristics for comparing the value of interventions:
Importance only
Importance and Tractability
The full ITC framework
(The above analysis is only approximately correct and will depend on details like the precise probability distribution over the interventions you’re comparing and your uncertainty distributions over interventions for each factor.
The ITC framework can be further extended in several ways, like: making precise the curves relating interventions to the ITC factors, extending the analysis of resources to other possible bottlenecks like time and people, incorporating the ideas of comparative advantage and marketplaces, …. I hope someone does this!)
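To make the range-versus-uncertainty arithmetic above concrete, here is a minimal sketch of the distinguishing-power calculation. The ranges and uncertainty units are just the intuitive guesses given above, not measured quantities.

```python
# Sketch of the "distinguishing units" calculation: factor range divided by the
# typical per-intervention uncertainty in that factor. All numbers are the
# intuitive guesses from the comment above.

factors = {
    # factor: (low end of range, high end of range, typical uncertainty unit)
    "Importance (util/%progress)":           (-1e9,  1e9, 10),
    "Tractability (%progress/$)":            (-1e-6, 1,   1e-6),
    "Crowdedness adjustment factor ($/$in)": (-10,   1e3, 1),
}

for name, (low, high, uncertainty) in factors.items():
    distinguishing_units = (high - low) / uncertainty
    print(f"{name}: ~{distinguishing_units:.0e} distinguishing units")

# Prints roughly 2e8, 1e6, and 1e3, matching the Importance > Tractability >
# Crowdedness ordering behind the heuristics above.
```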
(PS: I’m thinking of making this into a short post, and I enjoy writing collaborations, so if someone is interested, send me an EA Forum message.)
I wonder what sort of Fermi calculation we should apply to this? My quick (quite possibly wrong) numbers are:
P(it goes world-scale pandemic) = 1/3, if I believe the exponential spreading math (hard to get my human intuition behind it) and the long, symptomless, contagious incubation period
P(a particular person gets it | it goes world-scale pandemic) = 1/3, estimating from similar events
P(a particular person dies from it | a particular person gets it) = 1/30, and this may be age- or preexisting-condition-agnostic and could, speculatively, increase if vital equipment is too scarce (see other comment)
=> P(death of a randomly selected person from it) = ~1/300
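As a quick sanity check of the multiplication (the three probabilities are just the rough guesses above, nothing more):

```python
# Sanity check of the Fermi multiplication; the inputs are rough guesses only.
from fractions import Fraction

p_pandemic = Fraction(1, 3)   # P(it goes world-scale pandemic)
p_infected = Fraction(1, 3)   # P(a particular person gets it | pandemic)
p_death    = Fraction(1, 30)  # P(a particular person dies | they get it)

p_total = p_pandemic * p_infected * p_death
print(p_total, "≈", float(p_total))  # 1/270 ≈ 0.0037, i.e. roughly 1/300
```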
What are your thoughts?