I’m a quantitative biologist with a background in evolutionary theory, microbiome data science, and metagenomics methods development. I co-lead the [Nucleic Acid Observatory project](https://naobservatory.org), which seeks to develop a metagenomics-based early warning system for future pandemics.
mike_mclaren
Consider funding the Nucleic Acid Observatory to Detect Stealth Pandemics
Excellent post; I did not read it carefully enough to evaluate all the details, but these are all things we are concerned with at the Nucleic Acid Observatory, and I think your three “Reasons” are a great breakdown of the core issues.
In his recent interview on the 80,000 Hours Podcast, Toby Ord discussed how nonstandard analysis and its notion of hyperreals may help resolve some apparent issues arising from infinite ethics (link to transcript). For those interested in learning more about nonstandard analysis, there are various books and online resources. Many involve fairly high-level math, as they aim to put what was originally an intuitive but imprecise idea onto rigorous footing. Instead of those, you might want to check out H. Jerome Keisler’s Elementary Calculus: An Infinitesimal Approach, which is freely available online. It aims to be an introductory calculus textbook for college students, using hyperreals instead of limits and delta-epsilon proofs to teach the essential ideas of calculus such as derivatives and integrals. I haven’t actually read this book but believe it is the best-known book of this sort. Here’s another similar-seeming book by Dan Sloughter.
mike_mclaren’s Quick takes
Predicting Virus Relative Abundance in Wastewater
Announcing the Nucleic Acid Observatory project for early detection of catastrophic biothreats
Great!
Potentially. This is something that I and others working on metagenomic monitoring have discussed and whose practicalities we would like to investigate. If anyone has connections to international airlines or knows about the legalities/ownership of airline waste, I’d be interested in chatting.
It seems that Sandberg is discussing something like this typology in https://www.youtube.com/watch?v=Wn2vgQGNI_c
Edit: Sandberg starts talking about three categories of hazards at ~12:00
I see, thank you!
Hi Ajeya, thanks for doing this and for your recent 80K interview! I’m trying to understand what assumptions are needed for the argument you raise in the podcast discussion of fairness agreements: that a longtermist worldview should have been willing to trade away all its influence for influence over ever-larger potential universes. There are two points below; I was wondering if you could comment on whether and how they align with your argument.
- My intuition says that the argument requires a prior probability distribution on universe size that has an infinite expectation, rather than just a prior with non-zero probability on all possible universe sizes but a finite expectation (like a power-law distribution with exponent k > 2).
- But then I figured that even in a universe that was literally infinite but had a non-zero density of value-maximizing civilizations, the influence that any one civilization or organization has over that infinite value might still be finite. So I’m wondering whether what is needed to justify trading up for influence over ever-larger universes is actually something like the expectation E[V/n] being infinite, where V = total potential value in the universe and n = number of value-maximizing civilizations.
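As a quick numerical check of the finite-vs-infinite expectation point above (my own sketch, assuming a Pareto-type density p(x) = (k − 1)·x^(−k) on [1, ∞), which is not a distribution anyone in the discussion specified): the truncated mean converges as the cutoff M grows when k > 2, but keeps growing like log(M) when k = 2.

```python
# Sketch (my own illustration): truncated expectation of a power-law density
# p(x) = (k - 1) * x**(-k) on [1, inf). For k > 2 the mean converges to
# (k - 1) / (k - 2); for k <= 2 it diverges as the cutoff M grows.
def truncated_mean(k, M, steps=200_000):
    dx = (M - 1) / steps
    total = 0.0
    for i in range(steps):
        x = 1 + (i + 0.5) * dx          # midpoint rule
        total += x * (k - 1) * x ** (-k) * dx
    return total

for M in (10, 100, 1000):
    print(M, round(truncated_mean(3.0, M), 3), round(truncated_mean(2.0, M), 3))
# k = 3: approaches the finite mean (k - 1)/(k - 2) = 2
# k = 2: grows like log(M), i.e. the full expectation is infinite
```

So a prior that assigns non-zero probability to arbitrarily large universes can still have a finite expectation; the tail exponent is what matters.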
I have very little skin in the game here, as I don’t personally have a strong desire for an acronym. But my 2 cents are that “Reasoning carefully” can be shortened to “Reasoning” (or “Reason”) for this purpose with no loss; the “careful” part is implied. And I think I identify more with the idea of using careful reasoning than with rationality. “Reason(ing)” also matches an existing short definition of EA as “Using reason and evidence to do the most good” (currently the page title for effectivealtruism.org).
Non-itemizing US taxpayers can deduct $300 of their 2020 donations
Thanks for the post! This is just the type of thinking I wanted to do this morning, and I’m finding it and the spreadsheet template a useful motivator.
The given “Dive In” link is broken; I think the correct one is http://mindingourway.com/dive-in-2/
Thanks for your response and the link to your newer post and the Ord and Hanson refs. I’ll just add a thought I had while reading:
> This is why I explicitly noted that here I was using MVP in a sense focused only on genetic diversity. To touch on the other “aspects” of MVP, I also have “What population size is required for economic specialisation, technological development, etc.?”
>
> It seems fine to me for people to also use MVP in a sense referring to all-things-considered ability to survive, or in a sense focused only on e.g. economic specialisation...
This all makes sense, but it sounds to me to be at risk of leaving out the population/conservation biology perspective (beyond genetic considerations). A large part of what motivated me to write my original post is that I do think it is valuable to use frameworks from population and conservation biology to study human extinction risk. But it is important to include all the factors those fields identify as important: namely, environmental and demographic stochasticity, as well as habitat fragmentation and degradation, which could pose much greater risks than inbreeding and genetic drift.
Thanks for writing this post! I enjoyed looking over these, many of which I have also been puzzling about.
What’s the minimum viable human population (from the perspective of genetic diversity)?
After seeing this question picked up here, I thought I would share some quick thoughts from the perspective of a person with a population biology/evolution background. I think this is a reasonable question to ask, but I suspect it is not as important as the other factors that go into the broader question of what is the minimum population size from which humanity is likely to recover, period. Genetics are just one factor, and probably not the most important one, when we consider the probability of recovery after a severe drop in global population.
Suppose that after some catastrophic event the population of humanity has suddenly dropped to a much smaller and more fragmented global population, e.g. 10,000 individuals scattered in ~100 groups of 100 each across the globe. While the population size is small, it will be particularly susceptible to going extinct due to random fluctuations in population size. The population could remain stationary or gradually decline until a random event eventually causes extinction, or it could start increasing until it is large enough to be robust to extinction from random events.
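This kind of demographic stochasticity is easy to see in a toy branching-process simulation. This is my own sketch with an arbitrary assumed offspring distribution (0, 1, or 2 descendants with probabilities 0.30/0.35/0.35, giving 5% expected growth per generation), not a calibrated model:

```python
import random

# Toy Galton-Watson branching process (my own illustration): each individual
# independently leaves 0, 1, or 2 descendants with probabilities
# 0.30 / 0.35 / 0.35, so the expected growth rate is 1.05 per generation.
def next_generation(n):
    total = 0
    for _ in range(n):
        u = random.random()
        if u < 0.65:
            total += 0 if u < 0.30 else 1
        else:
            total += 2
    return total

def goes_extinct(n0, max_generations=400, safe_size=500):
    n = n0
    for _ in range(max_generations):
        if n == 0:
            return True
        if n >= safe_size:   # treat large populations as effectively safe
            return False
        n = next_generation(n)
    return n == 0

random.seed(0)
trials = 300
rate_10 = sum(goes_extinct(10) for _ in range(trials)) / trials
rate_100 = sum(goes_extinct(100) for _ in range(trials)) / trials
print(rate_10, rate_100)
```

Even though the population grows in expectation, a meaningful fraction of runs starting from 10 founders die out through chance fluctuations alone, while runs starting from 100 founders almost never do.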
The idea of a minimum viable population size (MVP) from a purely genetic perspective is that there is, in theory, a population size small enough that the population would decline and go extinct due to low genetic fitness. This is because small populations are predicted to have lower average genetic fitness due to increased expression of recessive deleterious mutations (“inbreeding depression”), increased fixation of deleterious mutations in the population, or a lack of genetic variation that would allow adaptation to the environment.
But in reality, the population seems more likely to go extinct because of poor environmental conditions, random environmental fluctuations, loss of cultural knowledge (which, like genetic variation, goes down in small populations), or lack of physical goods and technology, none of which have much to do with genetic variation.
Another way in which the concept of an MVP is too simplistic is that it is defined with respect to a genetic “equilibrium”: it assumes that conditions have been stable enough that there is a constant level of genetic variation in the population. However, after a sudden population decline, we would be far from equilibrium; we would still have lots of genetic variation from the time the population was large. This variation would start to decay, but as different local populations become fixed for different variants, much of it would be maintained at the global level and could be converted back into local variation by small amounts of migration. Such considerations are not usually included in MVP calculations. (Some collaborators and I have written about this last point as it relates to conserving endangered species here.)
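The fixation point can be illustrated with a minimal Wright-Fisher drift sketch. This is my own toy example with arbitrary assumed deme sizes and counts: isolated demes each drift to fixation for one variant or the other, so within-deme variation is lost while the variant itself persists globally.

```python
import random

# Toy Wright-Fisher drift sketch (my own illustration): 20 isolated demes of
# N = 50 haploid individuals each start with an allele at frequency 0.5.
# Drift fixes each deme for one variant (local variation is lost), but demes
# fix for different alleles, so variation persists at the global level.
def binomial(n, p):
    """Number of successes in n independent trials with success probability p."""
    return sum(1 for _ in range(n) if random.random() < p)

def drift_to_fixation(n=50, p0=0.5, max_gen=10_000):
    count = round(n * p0)
    for _ in range(max_gen):
        if count in (0, n):               # allele lost or fixed in this deme
            break
        count = binomial(n, count / n)    # Wright-Fisher resampling
    return count / n

random.seed(1)
freqs = [drift_to_fixation() for _ in range(20)]
global_freq = sum(freqs) / len(freqs)
print(freqs)        # each deme ends at 0.0 or 1.0
print(global_freq)  # the global frequency stays intermediate
```

Migration between demes would then re-introduce the locally lost variant, converting the between-deme variation back into within-deme variation.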
Perhaps we should keep the term “minimum viable population size” but use a broader definition based on likelihood to survive, period. I see that Wikipedia uses a broad definition that includes extinction due to demographic and environmental stochasticity, but often MVP is used as in the OP to refer just to extinction due to genetic reasons, so it is important to clarify terms.
- EA Forum Prize: Winners for September 2020 (5 Nov 2020 6:23 UTC; 17 points)
- Comment on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? (27 Dec 2020 3:25 UTC; 4 points)
I see, thanks for the explanation!
I’d be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.
My understanding is that Toby Ord does just this in his new book The Precipice (his new AI x-risk estimate is also discussed in his recent 80K podcast interview about the book), though it would still be good to have others weigh in.
Thanks for the post! I wanted to add a clarification regarding the discussion of metagenomic sequencing: China does have metagenomic sequencing. In fact, metagenomic sequencing was used in China to help identify the presence of a new coronavirus in early COVID-19 patient samples (https://www.nature.com/articles/s41586-020-2008-3).