I am the co-founder of and researcher at the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last decade I have worked with MIRI, CFAR, EA Global, and Founders Fund, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
JustinShovelain
Other perspectives that are arguably missing, or extensions that could be made, include:
Side effect analysis and modelling the entire future trajectory (generalizations of these can be created):
Refining our estimates of value given uncertainties about the future:
Specialization for a particular problem, with ensuing accuracy improvements, can be done via causal analysis, Fermi modelling, and Bayesian modelling, and can draw inspiration from https://forum.effectivealtruism.org/posts/zdAst6ezi45cChRi6/list-of-ways-in-which-cost-effectiveness-estimates-can-be
The values-to-actions chain idea can be used to substantiate value more clearly: https://forum.effectivealtruism.org/posts/Ekzvat8FbHRiPLn9Z/the-values-to-actions-decision-chain-a-lens-for-improving
One can even try to derive things directly from expected utility all the way down to actions, but the mathematics gets complex quickly. (I did some work on this in 2015 and am sitting on a draft paper that I'll publish someday, somewhere; it is beyond my current ability to explain clearly and efficiently to a large audience.)
Here also is an additional post analyzing the ITN framework: https://forum.effectivealtruism.org/posts/fR55cjoph2wwiSk8R/formalizing-the-cause-prioritization-framework
Update from Convergence Analysis
In July, we published the following research posts:
Improving the future by influencing actors’ benevolence, intelligence, and power: This post outlines a framework for generating, and assessing the expected value of, actions to improve the long-term future, and discusses nine implications of this framework. We were excited to see this framework already drawn on in two new forum posts by other authors (1, 2).
Moral circles: Degrees, dimensions, visuals: This post overviews the classic concept of moral circles, discusses two important complexities that that conception overlooks or fails to make explicit, and shows how they can be represented visually. This post also led to several researchers getting in contact with Michael Aird (its author) and then being connected with each other.
Crucial questions for longtermists: This post introduces a collection of “crucial questions for longtermists”: important questions about the best strategies for improving the long-term future. The collection is intended to serve as an aide to thought and communication, a kind of research agenda, and a kind of structured reading list. In August, we followed up with a post focusing specifically on Crucial questions about optimal timing of work and donations.
Additionally, our Researcher/Writer Michael Aird published:
Thanks for writing the post! I think we need a lot more strategy research, with cause prioritization being one of the most important types; that is why we founded Convergence Analysis (theory of change and strategy, our site, and our publications). Within our focus on x-risk reduction, we do cause prioritization, describe how to do strategy research, and have been working to fill the EA information hazard policy gap. We are mostly focused on strategy research as a whole, which lays the groundwork for cause prioritization. Here are some of our articles:
Heuristics for cause prioritization and assessing interventions
Components of strategy research (one of which is cause prioritization, which depends on the others)
We’re a small and relatively new group, and we’d like to see more people and groups do this type of research and to see this field get more support and grow. There is a vast amount to do, and immense opportunity to do good with this type of research.
Improving the future by influencing actors’ benevolence, intelligence, and power
Nice post!
Here are a couple of additional posts by Gwern that I think are worth checking out:
https://www.lesswrong.com/posts/ktr39MFWpTqmzuKxQ/notes-on-psychopathy
https://www.lesswrong.com/posts/Ft2Cm9tWtcLNFLrMw/notes-on-the-psychology-of-power
Causal diagrams of the paths to existential catastrophe
State Space of X-Risk Trajectories
Following Sean, I’ll also describe my motivation for taking the bet.
After Sean suggested the bet, I felt as if I had to take him up on it for the group’s epistemic benefit; my hand was forced. Firstly, I wanted to get people to take nCoV seriously and to think thoroughly about it (for the present case and for modelling possible future pandemics); from an inside-view model perspective, the numbers I was getting were quite worrisome. I felt that if I didn’t take him up on the bet, people wouldn’t take the issue as seriously, nor take explicitly modelling things themselves as seriously either. I was trying to socially counter what sometimes feels like a learned helplessness people have with respect to analyzing things or solving problems. Also, the EA community is especially clear-thinking, and I think a place like the EA Forum is a good medium for problem solving around things like nCoV.
Secondly, I generally think that holding people in some sense accountable for their belief statements is a good thing (up to some caveats); it improves the collective epistemic process. In general I prefer exchanging detailed models in discussion to exchanging vague intuitions mediated by a bet, but exchanging intuitions is still useful. I would also generally rather make bets about things that are less grim, and wouldn’t have suggested this bet myself, but I do think it is important that we make predictions about things that matter, and some of those things are rather grim. With grim bets, though, we should definitely pay attention to how the bet might appear to parts of the community, and make the intent and motivation behind it clearer.
Third, I wished to bring more attention and support to the issue, in the hope that it causes people to take sensible personal precautions and that perhaps some of them can influence how things progress. I do not entirely know who reads this, and some readers may have influence, expertise, or cleverness they can contribute.
Nice find! Hopefully it will update soon as we learn more. What is your interpretation of it in terms of the mortality rate in each age bracket?
Sure, I’ll take the modification to option (i). Thanks Sean.
Four components of strategy research
The bet is on.
Hmm… I will take you up on a bet at those odds and with those resolution criteria. Let’s make it 50 GBP of mine vs 250 GBP of yours. Agreed?
I hope you win the bet!
(Note: I generally think it is good for the group epistemic process for people to take bets on their beliefs, but I am not entirely certain about that.)
Good points! I agree, but I’m not sure how significant those effects will be… Do you have an idea of how we’d update on those effects in a principled, precise way?
Updating the Fermi calculation somewhat:
P(it goes world-scale pandemic) = 1/3, no updates (the Metaculus estimate referenced in another comment counteracted my better first-principles estimation)
P(a particular person gets it | it goes world-scale pandemic) = 1/2, updating based on the reproduction number of the virus
P(a particular person dies from it | a particular person gets it) = 0.09, updating based on a guess of 1/2 probability that rare equipment is needed and a rough guess of 1/2 probability of fatality without it: 1/2 * 1/30 + 1/2 * (probability of pneumonia: (1/3 + 1/4) * 1/2) * (probability of fatality given pneumonia when rare equipment is needed: 1/2) ≈ 0.09
=> P(death of a randomly selected person from it) = ~1/67
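For concreteness, the Fermi chain above can be sketched in a few lines of Python. Every number here is the rough guess from the comment, not data:

```python
# Fermi estimate from the comment above; all inputs are rough guesses.
p_pandemic = 1 / 3   # P(it goes world-scale pandemic)
p_infected = 1 / 2   # P(a particular person gets it | pandemic)

# P(a particular person dies | gets it):
p_needs_rare_equipment = 1 / 2                # guess: chance rare equipment is needed
p_death_no_equipment_needed = 1 / 30          # baseline fatality guess
p_pneumonia = (1 / 3 + 1 / 4) * 1 / 2         # averaged pneumonia-rate guesses
p_death_given_pneumonia = 1 / 2               # fatality guess when equipment is unavailable

p_death_given_infected = (
    (1 - p_needs_rare_equipment) * p_death_no_equipment_needed
    + p_needs_rare_equipment * p_pneumonia * p_death_given_pneumonia
)

p_death = p_pandemic * p_infected * p_death_given_infected
print(round(p_death_given_infected, 2))  # 0.09
print(round(1 / p_death))                # 67, i.e. P(death) ≈ 1/67
```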
I’m not entirely sure what to think of the numbers; I cannot deny the logic, but it’s pretty grim, and I hope I’m missing some critical details, my intuitions are wrong, or unknown unknowns make things more favorable.
Hopefully future updates and information will resolve some of the uncertainties here and make the numbers less grim. One large uncertainty is how the virus will evolve over time.
Nice list!
Adding to it a little:
Avoid being sick with two things at once or being sick with something else immediately before.
When it comes to supplements, the evidence and effect sizes are not that strong. Referencing examine.com and what I generally remember, I roughly think that the best immune-system-strengthening supplements would be zinc and echinacea, with maybe mild effects from other things like vitamin C, vitamin D, and whey protein. There may be a couple of additional herbs that could do something, but it’s unclear whether they are safe to take for a long duration. What you’d aim for is decreasing the severity of viral pneumonia induced by something like influenza.
It’s possible that some existing antivirals will be helpful but currently this is unknown.
The exponential growth curve and incubation period also have implications about “bugging out” strategies where you get food and water, isolate, and wait for it to be over. Let’s estimate again:
Assuming, as in the above comment, that we are 1/3 of the way up the exponential climb (in reported numbers) towards the total world population and that it took a month, in two more months (the end of March) we would expect it to reach saturation. If the infectious incubation period is 2 weeks (and people are essentially uniformly infectious during that time), then you’d move the two-month date forward by two weeks (the middle of March). Assuming you don’t want to take many risks here, you might add a week’s buffer in front (the end of the first week of March). Finally, after symptoms arise people may be infectious for a couple of weeks (I believe this is correct; anyone have better data?). So the total amount of time for the isolation strategy is about 5 weeks (and it may need to start as early as the end of the first week of March, or earlier depending on transportation and supply disruptions).
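A minimal sketch of that timeline arithmetic, assuming the comment's dates and durations (all of which are guesses, not data):

```python
# Isolation-window arithmetic from the comment above; all durations are assumptions.
from datetime import date, timedelta

saturation = date(2020, 3, 31)                # ~2 more months of exponential growth
infectious_incubation = timedelta(weeks=2)    # assumed invisible infectious period
safety_buffer = timedelta(weeks=1)            # margin for error before that
post_symptom_infectious = timedelta(weeks=2)  # assumed infectious period after symptoms

# Start isolating before the incubation-shifted saturation date, minus a buffer.
isolation_start = saturation - infectious_incubation - safety_buffer
total_isolation = infectious_incubation + safety_buffer + post_symptom_infectious

print(isolation_start)            # 2020-03-10, roughly the end of the first week of March
print(total_isolation.days // 7)  # 5 (weeks)
```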
Governments by detecting cases early or restricting travel, and citizens by isolating and using better hygiene, could change these numbers and dates.
(note: for future biorisks that may be more severe this reasoning is also useful)
I base it on what Greg mentions in his reply about the swine flu, and also on the reasoning that the reproduction number has to go below 1 for the virus to stop spreading. If its normal reproduction number before people have become immune (after being sick) is X (say 2), then to get the effective reproduction number below 1 we need (susceptible population proportion) * (normal reproduction number) < 1. So with a reproduction number of 2, the proportion who get infected will be about 1/2.
This assumes that people have time to become immune, so for a fast-spreading virus more than that proportion would fall ill (note, though, that pointing in the opposite direction is the effect that not everyone is uniformly likely to get ill, because some people are in relative isolation or have very good hygiene).
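The threshold argument above generalizes: spread stops once the susceptible fraction falls below 1/R0, so roughly 1 - 1/R0 of people get infected before that point. A quick sketch:

```python
# Herd-immunity threshold implied by (susceptible fraction) * R0 < 1.
def infected_fraction_at_threshold(r0: float) -> float:
    """Fraction infected by the time the effective reproduction number drops below 1."""
    return 1 - 1 / r0

print(infected_fraction_at_threshold(2))            # 0.5, matching the 1/2 above
print(round(infected_fraction_at_threshold(3), 2))  # 0.67 for a more contagious virus
```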
It’s based on a few facts and swirling them around in my intuition to choose a single simple number.
A long invisible contagious incubation period (somewhat indicated, but maybe wrong) and a high degree of contagiousness (the R0 factor) imply that it is hard to contain and should spread through the network (looking something like probability spreading in a Markov chain, with transition probabilities roughly following transportation probabilities).
The exponential growth implies that we are only a few doublings away from a world-scale pandemic (also note that we’re probably better at stopping things when they’re at small scale). In the exponential sense, 4,000 is halfway between 1 and 8 million, and about a third of the way to the world population.
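That "in the exponential sense" claim can be checked in log scale (the world population figure here is an assumption):

```python
# "Halfway between 1 and 8 million, about a third of the way to world population",
# measured in doublings (log base 2).
import math

cases = 4_000
doublings_so_far = math.log2(cases)  # ~12 doublings from a single case
to_8_million = math.log2(8e6)        # ~22.9 doublings from 1 to 8 million
to_world_pop = math.log2(7.8e9)      # ~32.9 doublings (assumed world population)

print(round(doublings_so_far / to_8_million, 2))  # 0.52: about halfway
print(round(doublings_so_far / to_world_pop, 2))  # 0.36: about a third
```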
Nice succinct post.
A related reference you might like: https://www.lesswrong.com/posts/NjYdGP59Krhie4WBp/updating-utility-functions. It goes into getting it to care about what we want before it knows what we want.