I am a co-founder of and researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last decade I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
JustinShovelain
My argument is about the latter; the variances decrease in size from I to T to C. The unit analysis still works because the other parts are still implicitly there, but they are treated as constants when dropped from the framework.
Nice! I would argue, though, that because we generally do not consider all dimensions at once, and because not all game theory situations (“games”) lend themselves to this dimensional expansion, we may, for all practical purposes, sometimes find ourselves in this situation.
Overall though, the idea of expanding the dimensionality does point towards one way to remove this dynamic.
I agree with your thoughts. Competitiveness isn’t necessarily fully orthogonal to common good pressures but there generally is a large component that is, especially in tough cases.
If they are not orthogonal then they may reach some sort of equilibrium that maximizes competitiveness without decreasing common good to zero. However, in a higher-dimensional version of this it becomes more likely that they are mostly orthogonal (a priori, more things are orthogonal in higher-dimensional spaces), and if what is competitive can change with time by walking through dimensions (for instance moving in dimension 4, then 7, then 2, then...) and iteratively shifting (this is hard to express and still a bit vague in my mind), then competitiveness and common good may become more orthogonal with time.
The Moloch and Pareto optimal frontier idea is probably extendable to deal with frontier movement, dealing with non-orthogonal dimensions, deconfounding dimensions, expanding or restricting dimensionality, and allowing transformations to iteratively “leak” into additional dimensions and change the degree of “orthogonality.”
The reason why we’d expect it to maximize competitiveness is in the sense that: what spreads spreads, what lives lives, what is able to grow grows, what is stable stays stable… and not all of this is aligned with humanity’s ultimate values; the methods that sometimes maximize competitiveness (like not internalizing external costs, wiping out competitors, or all work and no play) much of the time don’t maximize achieving our values. What is competitive in this sense is, however, dependent on the circumstances, and hopefully we can align it better. I hope this clarifies things.
Yes, the model in itself doesn’t say that we’ll tend towards competitiveness. That comes from the definition of competitiveness I’m using here and is similar to Robin Hanson’s suggestion. “Competitiveness” as used here just refers to the statistical tendency of systems to evolve in certain ways—it’s similar to the statement that entropy tends to increase. Some of those ways are aligned with our values and others are not. In making the axes orthogonal I was using the, probably true, assumption that most ways of system evolution are not in alignment with our values.
(With the reply I was trying to point in the direction of this increasing entropy like definition.)
It’s true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I’d expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It lacks, though, the examples and explicit connections that would make this salient. In a future post that I’ve got queued on AI safety strategy I already have a link to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I’ll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.
Thanks for the article. One thing I’m wondering about, which has implications for the large-scale pandemic case, is how much equipment for “mechanical ventilation and sometimes ECMO (pumping blood through an artificial lung for oxygenation)” society has, and what the consequences of not having access to such equipment would be. Would such people die? In that case the fatality rate would grow massively, to something like 25 to 32%.
Whether there is enough equipment would depend upon how many people get sick at once, whether more than one person can use the same equipment in an interleaved fashion, how long each sick person needs the equipment, whether there are good alternatives to the equipment, and how quickly additional equipment could be built or improvised.
So the case I’d be worried about here would be a very quick spread where you need rare expensive equipment to keep the fatality rate down where it is currently.
By total mortality rate do you mean the total number of people who eventually die, or do you mean the percentage?
If the former I agree.
If you mean the latter… I see it as a toss-up between the selection effect of the more severely affected being the ones we know have it (which would decrease the true mortality rate relative to the published numbers) and the time it takes for the disease to fully progress (which would increase the true mortality rate relative to the published numbers).
How confident are you that it affects mainly older people or those with preexisting health conditions? Are the stats solid now? I vaguely recall that SARS and MERS (possibly the relevant reference class) were age agnostic.
I wonder what sort of Fermi calculation we should apply to this? My quick (quite possibly wrong) numbers are:
P(it goes world scale pandemic) = 1/3, if I believe the exponential spreading math (hard to get my human intuition behind it) and the long, symptomless, contagious incubation period
P(a particular person gets it | it goes world scale pandemic) = 1⁄3, estimating from similar events
P(a particular person dies from it | a particular person gets it) = 1⁄30, and this may be age or preexisting condition agnostic and could, speculatively, increase if vital equipment is too scarce (see other comment)
=> P(death of a randomly selected person from it) = ~1/300
What are your thoughts?
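For concreteness, the multiplication above can be sketched in a few lines; every input is my rough guess, not a measured quantity:

```python
# Fermi estimate sketch; all probabilities are rough guesses.
p_pandemic = 1 / 3   # P(it goes world-scale pandemic)
p_infected = 1 / 3   # P(a particular person gets it | pandemic)
p_death = 1 / 30     # P(death | a particular person gets it)

p_random_death = p_pandemic * p_infected * p_death
print(f"P(death of a randomly selected person) ≈ 1/{round(1 / p_random_death)}")
# → 1/270, i.e. roughly the ~1/300 quoted above
```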
It’s based on a few facts and swirling them around in my intuition to choose a single simple number.
A long invisible contagious incubation period (seems somewhat indicated but may be wrong) and a high degree of contagiousness (the R0 factor) imply it is hard to contain and should spread through the network (looking something like probability spreading in a Markov chain with transition probabilities roughly following transportation probabilities).
The exponential growth implies that we are only a few doublings away from a world-scale pandemic (also note we’re probably better at stopping things when they’re at small scale). In the exponential sense, 4,000 is halfway between 1 and 8 million and about a third of the way to the world population.
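As a rough check on the log-scale claim, assuming ~4,000 reported cases and a world population of roughly 8 billion:

```python
import math

# "A third of the way" on a log scale, and doublings remaining.
cases = 4_000
world = 8_000_000_000

print(math.log(cases) / math.log(world))  # ≈ 0.36, about a third
print(math.log2(world / cases))           # ≈ 21 doublings remaining
```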
I base it on what Greg mentions in his reply about the swine flu, and also on the reasoning that the reproduction number has to go below 1 for the spread to stop. If its normal reproduction number before people have become immune (after being sick) is X (say, 2), then to get the effective reproduction number below 1 we need (susceptible population proportion) * (normal reproduction number) < 1. So with a reproduction number of 2, the proportion who get infected will be 1/2.
This assumes that people have time to become immune, so for a fast-spreading virus more than that proportion would fall ill (note though that pointing in the opposite direction is the effect that not everyone is uniformly likely to get ill, because some people are in relative isolation or have very good hygiene).
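The threshold reasoning can be sketched as follows, assuming homogeneous mixing and the simple herd-immunity threshold (not a full final-size calculation):

```python
# Spread stops once (susceptible fraction) * R0 < 1, so roughly a
# fraction 1 - 1/R0 of the population becomes infected (then immune).
def fraction_infected(r0: float) -> float:
    return max(0.0, 1 - 1 / r0)

print(fraction_infected(2.0))  # → 0.5, matching the 1/2 above
```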
The exponential growth curve and incubation period also have implications about “bugging out” strategies where you get food and water, isolate, and wait for it to be over. Let’s estimate again:
Assuming, as in the above comment, we are 1/3 of the way up the exponential climb (in reported numbers) towards the total world population and it took a month, in two more months (the end of March) we would expect it to reach saturation. If the infectious incubation period is 2 weeks (and people are essentially uniformly infectious during that time) then you’d move the two-month date forward by two weeks (to the middle of March). Assuming you don’t want to take many risks here, you might add a week of buffer in front (the end of the first week of March). Finally, after symptoms arise people may be infectious for a couple of weeks (I believe this is correct; anyone have better data?). So the sum total amount of time for the isolation strategy is about 5 weeks (and it may need to start as early as the end of the first week of March, or earlier depending on transportation and supply disruptions).
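The date arithmetic behind this can be sketched directly; every input is a guess from the paragraph above (the saturation date, the 2-week infectious incubation, the 1-week buffer, and 2 weeks of post-symptom infectiousness), not epidemiological data:

```python
from datetime import date, timedelta

saturation = date(2020, 3, 31)    # ~two more months of exponential growth
incubation = timedelta(weeks=2)   # infectious incubation period
buffer = timedelta(weeks=1)       # safety margin in front
post_symptom = timedelta(weeks=2) # infectious after symptoms arise

start_isolating = saturation - incubation - buffer  # ~early March
end_isolating = saturation + post_symptom           # ~mid-April
duration = end_isolating - start_isolating          # ~5 weeks total
print(start_isolating, end_isolating, duration.days, "days")
```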
Governments, by detecting cases early or restricting travel, and citizens, by isolating and using better hygiene, could change these numbers and dates.
(note: for future biorisks that may be more severe this reasoning is also useful)
Nice list!
Adding to it a little:
Avoid being sick with two things at once or being sick with something else immediately before.
When it comes to supplements, the evidence and effect sizes are not that strong. Referencing examine.com and what I generally remember, I roughly think the best immune-system-strengthening supplements would be zinc and echinacea, with maybe mild effects from other things like vitamin C, vitamin D, and whey protein. There may be a couple of additional herbs that could do something, but it’s unclear whether they are safe to take for a long duration. What you’d aim for is decreasing the severity of viral pneumonia induced by something like influenza.
It’s possible that some existing antivirals will be helpful but currently this is unknown.
Updating the Fermi calculation somewhat:
P(it goes world scale pandemic) = 1/3, no updates (the Metaculus estimate referenced in another comment counteracted my better first-principles estimation)
P(a particular person gets it | it goes world scale pandemic) = 1⁄2, updating based on the reproduction number of the virus
P(a particular person dies from it | a particular person gets it) = 0.09, updating based on a guess of 1/2 probability that rare equipment is needed and a rough guess of 1/2 probability of fatality without it: 1/2*(1/30) + 1/2*(probability of pneumonia: (1/3+1/4)*1/2)*(probability of fatality given pneumonia and rare equipment is needed: 1/2) ≈ 0.09
=> P(death of a randomly selected person from it) = ~1/67
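Spelled out as a quick script (every number here is a rough guess, as above):

```python
# Updated Fermi estimate; all inputs are rough guesses.
p_pandemic = 1 / 3                 # P(world-scale pandemic)
p_infected = 1 / 2                 # from the reproduction-number argument
p_pneumonia = (1 / 3 + 1 / 4) / 2  # ≈ 0.29
p_equipment_needed = 1 / 2         # guess: rare equipment is needed
p_death_no_equipment = 1 / 2       # guess: fatality without that equipment

p_death = (1 - p_equipment_needed) * (1 / 30) \
    + p_equipment_needed * p_pneumonia * p_death_no_equipment
p_random_death = p_pandemic * p_infected * p_death
print(round(p_death, 2), f"1/{round(1 / p_random_death)}")  # → 0.09 1/67
```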
I’m not entirely sure what to think of the numbers; I cannot deny the logic, but it’s pretty grim, and I hope I’m missing some critical details, my intuitions are wrong, or unknown unknowns make things more favorable.
Hopefully future updates and information will resolve some of the uncertainties here and make the numbers less grim. One large uncertainty is how the virus will evolve over time.
Good points! I agree, but I’m not sure how significant those effects will be. Do you have an idea of how we’d update on those effects in a principled, precise way?
Hmm… I will take you up on a bet at those odds and with those resolution criteria. Let’s make it 50 GBP of mine vs 250 GBP of yours. Agreed?
I hope you win the bet!
(note: I generally think it is good for the group epistemic process for people to take bets on their beliefs but am not entirely certain about that.)
The bet is on.
Sure, I’ll take the modification to option (i). Thanks Sean.
Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I’d like to see more articles like this.
One thing I focus on when trying to make ITC more practical is ways to reduce its complexity even further. I do this by looking at which factors intuitively seem to have wider ranges in practice. Impact can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to around one. The Crowdedness component, which captures diminishing or increasing marginal returns, generally varies only by factors of thousands, from negative tens to positive thousands.
In summary the ranges are intuitively roughly:
Importance (util/%progress): (-10^9, 10^9)
Tractability (%progress/$): (-10^-6, 1)
Crowdedness adjustment factor ($/$in): (-10, 10^3)
Let’s assume each intervention is associated with a random sample from probability distributions over these ranges. Roughly speaking, then, we should care about these factors in proportion to how much they help us clearly see which intervention is better than another.
The extent to which these let us distinguish between the values of interventions is based on our uncertainty per factor for each intervention and on how the value depends on each factor. Because the value is equal to Importance*Tractability*CrowdednessAdjustmentFactor, each factor is treated the same (there is abstract symmetry). Thus we only need to consider how big each factor’s range is in terms of our typical per-factor uncertainty about an intervention. This then tells us how useful each factor is at distinguishing between interventions.
Pulling numbers out of the intuitive hat for the typical intervention uncertainty, I get:
Importance (util/%progress uncertainty unit): 10
Tractability (%progress/$ uncertainty unit): 10^-6
Crowdedness adjustment factor ($/$in uncertainty unit): 1
Dividing the ranges into these units lets us measure the distinguishing power of each factor:
Importance normalized range (distinguishing units): 10^8
Tractability normalized range (distinguishing units): 10^6
Crowdedness adjustment factor normalized range (distinguishing units): 10^3
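This normalization can be sketched in a few lines, using the intuitive ranges and per-factor uncertainties guessed above:

```python
import math

# "Distinguishing units" = (range width) / (typical uncertainty).
ranges = {
    "Importance": (-1e9, 1e9),     # util/%progress
    "Tractability": (-1e-6, 1.0),  # %progress/$
    "Crowdedness": (-10.0, 1e3),   # $/$in adjustment factor
}
uncertainty = {"Importance": 10.0, "Tractability": 1e-6, "Crowdedness": 1.0}

for name, (lo, hi) in ranges.items():
    units = (hi - lo) / uncertainty[name]
    print(f"{name}: ~10^{round(math.log10(units))} distinguishing units")
```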
As a rule of thumb, then, it looks like focusing on Importance is better than focusing on Tractability, which is better than focusing on Crowdedness. This lends itself to a sequence of improving heuristics for comparing the value of interventions:
Importance only
Importance and Tractability
The full ITC framework
(The above analysis is only approximately correct and will depend on details like the precise probability distribution over interventions you’re comparing and your uncertainty distributions over interventions for each factor.
The ITC framework can be further extended in several ways, like: making the curves of interventions on the ITC factors precise, extending the detailed analysis of resources to other possible bottlenecks like time and people, incorporating the ideas of comparative advantage and marketplaces, …. I hope someone does this!)
(PS I’m thinking of making this into a short post and enjoy writing collaborations so if someone is interested send me an EA forum message.)