# Vasco Grilo🔸

Karma: 6,460
• 5 Aug 2024 12:33 UTC
2 points
0āā¶ā0

Thanks for the comment, James!

Whatās the main evidence base guiding this approach and whatās the expected increase in accuracy attendees can expect the course to have?

I am tagging @jsteinhardt in case he wants to reply.

# Jacob Steinhardt's forecasting course lecture notes

4 Aug 2024 7:14 UTC
22 points
(forecasting.quarto.pub)

# Social behavior curves, equilibria, and radicalism

30 Jul 2024 16:28 UTC
17 points
(ericneyman.wordpress.com)
• 30 Jul 2024 15:49 UTC
2 points
0āā¶ā0

Am I confident that someone born in 2024 can't grow to be 242 cm? Nope. I just don't trust the statistical modeling all that much.
(If you disagree and are willing to offer 1,000,000:1 odds on this question, I'll probably be willing to bet on it.)

I do not want to take this bet, but I am open to other suggestions. For example, I think it is very unlikely that transformative AI, as defined on Metaculus, will happen in the next few years.

• 29 Jul 2024 22:48 UTC
4 points
0āā¶ā0

Thanks, Linch. Strongly upvoted.

Now for normal distributions, or normal-ish distributions, this may not matter all that much in practice. As you say, "height roughly follows a normal distribution", so as long as a distribution is roughly normal, small divergences don't get you too far away (maybe with a slightly differently shaped underlying distribution that fits the data it's possible to get a 242 cm human, maybe even 260 cm, but not 400 cm, and certainly not 4,000 cm).

Since height roughly follows a normal distribution, the probability of huge heights is negligible.

Right, by "the probability of huge heights is negligible", I meant heights way more than 2.42 m, such that the details of the distribution would not matter. I would not get an astronomically low probability of at least such a height based on the methodology I used to get an astronomically low chance of a conflict causing human extinction. To arrive at this, I looked into the empirical tail distribution. I did not fit a distribution to the 25th to 75th percentile range, which is probably what would have suggested a normal distribution for height, and then extrapolate from there. I said I got an annual probability of conflict causing human extinction lower than 10^-9 using 33 or fewer of the rightmost points of the tail distribution. The 33rd tallest person whose height was recorded was actually 2.42 m, which illustrates I would not have gotten an astronomically low probability for a height of at least 2.42 m.

This is why I think itās important to be able to think about a problem from multiple angles.

I agree. What do you think is the annualised probability of a nuclear war or volcanic eruption causing human extinction in the next 10 years? Do you see any concrete scenarios where the probability of a nuclear war or volcanic eruption causing human extinction is close to Toby's values?

I usually deploy this line ["any extremal distribution looks like a straight line when drawn on a log-log plot with a fat marker"] when arguing against people who claim they discovered a power law when I suspect something like a log-normal might be a better fit. But obviously it works in the other direction as well; the main issue is model uncertainty.

I think power laws overestimate extinction risk. Because power laws are scale-invariant, they imply the probability of going from 80 M annual deaths to 8 billion (extinction) would be the same as the probability of going from 8 billion to 800 billion annual deaths, which very much overestimates the risk of large death tolls. So it makes sense that the tail distribution eventually starts to decay much faster than implied by a power law, especially if the power law is fitted to the left tail.

On the other hand, I agree it is unclear whether the above tail distribution suggests an annual probability of a conflict causing human extinction above/below 10^-9. Still, even my inside view annual extinction risk from nuclear war of 5.53*10^-10 (which makes no use of the above tail distribution) is only 0.0111 % (= 5.53*10^-10/(5*10^-6)) of Toby's value.

• Thanks, Bradley, and welcome to the EA Forum! Strongly upvoted.

Given that it is unlikely that incorporating humidity would decrease heat-related mortality, my own view here is that this pushes current estimates towards a lower bound.

If adequately modelling humidity would increase heat deaths, I wonder whether it would also decrease cold deaths, such that the net effect is unclear.

In practice, these assumptions limit the ability to model things like extreme heat waves and heat domes, which can cause large fatality spikes (e.g. figure below from Washington State in 2021). Missing these features in some locations might be akin to missing almost all the possible heat related mortality in cooler climates.

As illustrated below, deaths from extreme cold and heat accounted for only a tiny fraction of the deaths from non-optimal temperature in 2015 in some countries, which attenuates the effect you are describing.

• 27 Jul 2024 10:01 UTC
5 points
0āā¶ā4

Thanks for the comment, Linch.

Thatās an odd prior. I can see a case for a prior that gets you to <10^-6, maybe even 10^-9, but how can you get to substantially below 10^-9 annual with just historical data???

Fitting a power law to the N rightmost points of the tail distribution of annual conflict deaths as a fraction of the global population leads to an annual probability of a conflict causing human extinction lower than 10^-9 for N no higher than 33 (for which the annual conflict extinction risk is 1.72*10^-10), where each point corresponds to one year from 1400 to 2000. The 33 rightmost points have annual conflict deaths as a fraction of the global population of at least 0.395 %. Below is how the annual conflict extinction risk evolves with the lowest annual conflict deaths as a fraction of the global population included in the power law fit (the data is here; the post is here).

The leftmost points of the tail suggest a high extinction risk because the tail distribution is quite flat for very low annual conflict deaths as a fraction of the global population.

The extinction risk starts to decay a lot as one uses increasingly rightmost points of the tail because the actual tail distribution also decreases for high annual conflict deaths as a fraction of the global population.
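The fitting procedure described above can be sketched as follows. The data here are synthetic (drawn from a hypothetical log-normal), NOT the Conflict Catalog figures, so the output is purely illustrative of the method, not of the actual risk estimate:

```python
import math
import random

def power_law_tail_fit(death_fractions, n_rightmost, extinction_fraction=1.0):
    """Fit a power law P(X >= x) = C * x**(-alpha) to the n_rightmost largest
    points of the empirical tail distribution, then extrapolate the annual
    probability of the death fraction reaching extinction_fraction (= 1)."""
    xs = sorted(death_fractions)
    n = len(xs)
    # Empirical tail probability P(X >= x_i) for each sorted point, in log-log space.
    pts = [(math.log(x), math.log((n - i) / n)) for i, x in enumerate(xs)]
    pts = pts[-n_rightmost:]  # keep only the n_rightmost largest points
    # Least-squares fit of log tail probability on log death fraction.
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    slope = sum((x - mx) * (y - my) for x, y in pts) / sum(
        (x - mx) ** 2 for x, _ in pts
    )
    intercept = my - slope * mx
    return math.exp(intercept + slope * math.log(extinction_fraction))

# Illustrative synthetic data: 601 annual death fractions, one per year
# from 1400 to 2000, drawn from an assumed log-normal.
random.seed(0)
sample = [math.exp(random.gauss(-8, 2)) for _ in range(601)]
risk = power_law_tail_fit(sample, n_rightmost=33)
print(f"Extrapolated annual extinction probability: {risk:.2e}")
```

Using fewer, more rightward points makes the fit track the fast-decaying far tail rather than the flat left tail, which is why the extrapolated risk falls as N decreases.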

Sapiens hasnāt been around for that long for longer than a million years! (and conflict with homo sapiens or other human subtypes still seems like a plausible reason for extinction of other human subtypes to me). There have only been maybe 4 billion species total in all of geological history! Even if you have almost certainty that literally no species has ever died of conflict, you still canāt get a prior much lower than 1ā4,000,000,000! (10^-9).

Interesting numbers! I think that kind of argument is too agnostic, in the sense that it does not leverage the empirical evidence we have about human conflicts, and I worry it leads to predictions which are very off. For example, one could also argue the annual probability of a human born in 2024 growing to a height larger than the distance from the Earth to the Sun cannot be much lower than 10^-6 because Sapiens have only been around for 1 M years or so. However, the probability should be way, way lower than that (excluding genetic engineering, very long light appendages, unreasonable interpretations of what I am referring to, like estimating the probability from the chance a spaceship with humans will collide with the Sun, etc.). One can see the probability of a (non-enhanced) human growing to such a height is much lower than 10^-6 based on the tail distribution of human heights. Since height roughly follows a normal distribution, the probability of huge heights is negligible. It might be the case that past human heights (conflicts) are not informative of future heights (conflicts), but past heights still seem to suggest an astronomically low chance of huge heights (conflicts causing human extinction).
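As an illustration of how thin a normal tail is at 242 cm, here is a sketch with hypothetical height parameters (mean 175 cm, standard deviation 7 cm; not fitted to any real dataset):

```python
import math

# Hypothetical illustrative parameters: adult height ~ Normal(175 cm, 7 cm).
mu, sigma = 175.0, 7.0
z = (242 - mu) / sigma  # roughly 9.6 standard deviations above the mean
# Survival function via erfc, which stays accurate this far in the tail,
# where computing 1 - cdf would round to 0 in double precision.
p_242 = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.1f}, P(height > 242 cm) = {p_242:.1e}")
```

Under a strictly normal model the probability is astronomically small, while people of 2.42 m and taller have actually been recorded, which is the sense in which the empirical tail is fatter than the fitted normal.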

It is also unclear from past data whether annual conflict deaths as a fraction of the global population will increase.

Below is some data on the linear regression of the logarithm of the annual conflict deaths as a fraction of the global population on the year.

As I have said:

There has been a slight downwards trend in the logarithm of the annual conflict deaths as a fraction of the global population, with the R^2 of the linear regression of it on the year being 8.45 %. However, it is unclear to me whether the sign of the slope is resilient against changes in the function I used to model the ratio between the Conflict Catalog's and actual annual conflict deaths.

• 26 Jul 2024 9:38 UTC
4 points
0āā¶ā0

I think I remain confused as to what you mean by "all deaths from non-optimal temperature".

I mean the difference between the deaths for the predicted and ideal temperature. From OWID:

The deaths from non-optimal temperature are supposed to cover all causes (temperature is a risk factor for death rather than a cause of death in GBD), not just extreme heat and cold (which only account for a tiny fraction of the deaths; see my last comment). I say "supposed" because it is possible the mortality curves above are not being modelled correctly, and this applies even more to the mortality curves in the future.

So to me it seems you are saying "I don't trust arguments about compounding risks and the data is evidence for that", whereas the data is inherently not set up to include that concern and does not really speak to the arguments that people most concerned about climate risk would make.

My understanding is that (past/present/future) deaths from non-optimal temperature are supposed to include conflict deaths linked to non-optimal temperature. However, I am not confident these are being modelled correctly.

I was not clear, but in my last comment I mostly wanted to say that deaths from non-optimal temperature account for the impact of global warming not only on deaths from extreme heat and cold, but also on cardiovascular or kidney disease, respiratory infections, diabetes and all others (including conflicts). Most causes of death are less heavy-tailed than conflict deaths, so I assume we have a better understanding of how they change with temperature.

• 25 Jul 2024 19:52 UTC
2 points
0āā¶ā0

Thanks for this, Vasco, thought-provoking as always!

Likewise! Thanks for the thoughtful comment.

Insofar as is this a correct representation of your argument

It seems like a fair representation.

a. Dying from heat stress is a very extreme outcome and people will act in response to climate change much earlier than dying. For example, before people die from heat stress, they might abandon their livelihoods and migrate, maybe in large numbers.

b. More abstractly, the fact that an extreme impact outcome (heat death) is relatively rare is not evidence for low impact in general. Climate change pressures are not like a disease that kills you within days of exposure and otherwise has no consequence.

Agreed. However:

• I think migration will tend to decrease deaths because people will only want to migrate if they think their lives will improve (relative to the counterfactual of not migrating).

• The deaths from non-optimal temperature I mentioned are supposed to account for all causes of death, not just extreme heat and cold. According to GBD, in 2021, deaths from environmental heat and cold exposure were 36.0 k (I guess this is what you are referring to by heat stress), which was just 1.88 % (= 36.0*10^3/ā(1.91*10^6)) of the 1.91 M deaths from non-optimal temperature. My post is about how these 1.91 M deaths would change.

a. You seem to suggest we are very uncertain about many of the effect signs. I think the basic argument why people concerned about climate change would argue that changes will be negative and that there will be compounding risks is that natural and human systems are adapted to specific climate conditions. That doesn't mean they cannot adapt at all, but it does mean that we should expect effects to be more likely negative, at least as short-term shocks, than positive for welfare.

This makes sense. On the other hand, one could counter global warming will be good because:

• There are more deaths from low temperature than from high temperature.

• The disease burden per capita from non-optimal temperature has so far been decreasing (see 2nd to last graph).

b. I think a lot of the other arguments on the side of "indirect risks are low" you cite are ultimately of the form (i) "indirect effects in other causes are also large" or (ii) "pointing to indirect effects makes things inscrutable and unverifiable". (i) might be true but doesn't matter, I think, for the question of whether warming is net-bad, and (ii) is also true, but does nothing by itself on whether those indirect effects are real; we can live in a world where indirect effects are rhetorically abused and still exist and indeed dominate in certain situations!

Agreed. I would just note that i) can affect prioritisation across causes.

• 25 Jul 2024 19:06 UTC
4 points
0āā¶ā4
in reply to: Stephen Clareās comment

Thanks for the comment, Stephen.

Vasco, how do your estimates account for model uncertainty?

I tried to account for model uncertainty by assuming a 10^-6 probability of human extinction given insufficient calorie production.

I donāt understand how you can put some probability on something being possible (i.e. p(extinction|nuclear war) > 0), but end up with a number like 5.93e-12 (i.e. 1 in ~160 billion). That implies an extremely, extremely high level of confidence.

Note there are infinitely many orders of magnitude between 0 and any astronomically low number like 5.93e-12. At least in theory, I can be quite uncertain while having a low best guess. I understand greater uncertainty (e.g. a higher ratio between the 95th and 5th percentiles) holding the median constant tends to increase the mean of heavy-tailed distributions (like lognormals), but it is unclear to what extent this applies. I have also accounted for that by using heavy-tailed distributions whenever I thought appropriate (e.g. I modelled the soot injected into the stratosphere per equivalent yield as a lognormal).
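The point about uncertainty and the mean can be made concrete with a small sketch: holding the median of a lognormal fixed, widening the 95th-to-5th percentile ratio raises the mean, but only by modest factors (the median value below is hypothetical):

```python
import math

# A lognormal with fixed median m can be written X = m * exp(sigma * Z) with
# Z standard normal: the median stays m however large sigma is, while the
# mean m * exp(sigma**2 / 2) grows with uncertainty.
median = 1e-10  # hypothetical low best guess for an annual risk
ratios = []
for r in (2, 10, 100):  # assumed ratio between 95th and 5th percentiles
    # For a lognormal, the 95th/5th percentile ratio is exp(2 * 1.645 * sigma).
    sigma = math.log(r) / (2 * 1.645)
    mean = median * math.exp(sigma**2 / 2)
    ratios.append(mean / median)
    print(f"95th/5th ratio {r:>3}: mean/median = {mean / median:.2f}")
```

Even a 100-fold spread between the 5th and 95th percentiles only lifts the mean to about 2.7 times the median, so a low median best guess keeps the mean of the same order of magnitude.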

As a side note, 10 of the 161 (6.21 %) forecasters of the Existential Risk Persuasion Tournament (XPT), 4 experts and 6 superforecasters, predicted a nuclear extinction risk by 2100 of exactly 0. I guess these participants know the risk is higher than 0, but consider it astronomically low too.

Putting ~any weight on models that give higher probabilities would lead to much higher estimates.

I used to be persuaded by this type of argument, which is made in many contexts by the global catastrophic risk community. I think it often misses that the weight a model should receive is not independent of its predictions. I would say high extinction risk goes against the low prior established by historical conflicts.

I am also not aware of any detailed empirical quantitative models estimating the probability of extinction due to nuclear war.

# Future deaths from non-optimal temperature and cost-effectiveness of stratospheric aerosol injection

25 Jul 2024 16:50 UTC
20 points
• 24 Jul 2024 21:36 UTC
17 points
3āā¶ā3

Thanks for the update, Toby. I used to defer to you a lot. I no longer do. After investigating the risks myself in decent depth, I consistently arrived at estimates of the risk of human extinction orders of magnitude lower than your existential risk estimates. For example, I understand you assumed in The Precipice an annual existential risk for:

• Nuclear war of around 5*10^-6 (= 0.5*10^-3/100), which is 843 k (= 5*10^-6/(5.93*10^-12)) times mine.

• Volcanoes of around 5*10^-7 (= 0.5*10^-4/100), which is 14.8 M (= 5*10^-7/(3.38*10^-14)) times mine.

In addition, I think the existential risk linked to the above is lower than their extinction risk. The worst nuclear winter of Xia et al. 2022 involves an injection of soot into the stratosphere of 150 Tg, which is just 1 % of the 15 Pg of the Cretaceous–Paleogene extinction event. Moreover, I think this would only be existential with a chance of 0.0513 % (= e^(-10^9/(132*10^6))), assuming:

• An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:

• An exponential distribution with a mean of 66 M years describes the time between:

• 2 consecutive such catastrophes.

• i) and ii) if there are no such catastrophes.

• Given the above, i) and ii) are equally likely. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1/2).

• Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as that if there were no such catastrophes.

• An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
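The 0.0513 % figure above follows directly from the exponential survival function; here is the arithmetic as a check:

```python
import math

# Reproducing the 0.0513 % figure from the bullets above via the exponential
# survival function P(T > t) = exp(-t / mean).
t_remaining = 1e9      # years until the Earth becomes uninhabitable
mean_regen = 2 * 66e6  # mean time for an intelligent sentient species to
                       # re-evolve, doubled per the reasoning above
p_no_reevolution = math.exp(-t_remaining / mean_regen)
print(f"P(no re-evolution within 1 Gyr) = {p_no_reevolution:.4%}")
```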

• 24 Jul 2024 15:48 UTC
2 points
0āā¶ā0

Hi David,

Based on my adjustments to CEARCH's analysis of nuclear and volcanic winter, the expected annual mortality of nuclear winter as a fraction of the global population is 7.32*10^-6. I estimated the deaths from the climatic effects would be 1.16 times as large as the ones from direct effects. In this case, the expected annual mortality of nuclear war as a fraction of the global population would be 1.86 (= 1 + 1/1.16) times the expected annual mortality of nuclear winter as a fraction of the global population, i.e. 0.00136 % (= 1.86*7.32*10^-6). So the annual losses in future potential mentioned in the table above are 221 (= 0.0030/(1.36*10^-5)) and 73.5 (= 0.0010/(1.36*10^-5)) times my expected annual death toll, whereas I would have expected the annual loss in future potential to be much lower than the expected annual death toll.
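As a sanity check on the arithmetic above (values as stated in the comment; the 221 and 73.5 use a rounded 1.36*10^-5, so small rounding differences are expected):

```python
# Reproducing the arithmetic in the comment above.
winter_mortality = 7.32e-6   # expected annual nuclear winter mortality fraction
climatic_per_direct = 1.16   # climatic deaths per direct death
# Total = direct + climatic = (1 + 1/1.16) times the climatic (winter) part.
war_mortality = (1 + 1 / climatic_per_direct) * winter_mortality
print(f"Annual nuclear war mortality fraction: {war_mortality:.3e}")
# Ratios of the table's annual losses in future potential to this death toll.
for loss in (0.0030, 0.0010):
    print(f"{loss:.4f} is {loss / war_mortality:.0f} times the expected death toll")
```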

• 24 Jul 2024 15:22 UTC
5 points
0āā¶ā0
in reply to: Matt Boydās comment

Great points, Matt.

I think essentially all (not just many) pathways from AI risk will have to flow through other more concrete pathways. AI is a general purpose technology, so I feel like directly comparing AI risk with other lower level pathways of risk, as 80 k seems to be doing somewhat when they describe the scale of their problems, is a little confusing. To be fair, 80 k tries to account for this talking about the indirect risk of specific risks, which they often set to 10 times the direct risk, but these adjustments seem very arbitrary to me.

In general, one can get higher risk estimates by describing risk at a higher level. So the existential risk from LLMs is smaller than the risk from AI, which is smaller than the risk from computers, which is smaller than the risk from e.g. subatomic particles. However, this should only update one towards e.g. prioritising "computer risk" over "LLM risk" to the extent the ratio between the cost-effectiveness of "computer risk interventions" and "LLM risk interventions" is proportional to the ratio between the scale of "computer risk" and "LLM risk", which is quite unclear given the ambiguity and vagueness of the 4 terms involved[1].

To get more clarity, I believe it is better to prioritise at a lower level, assessing the cost-effectiveness of specific classes of interventions, as Ambitious Impact (AIM), Animal Charity Evaluators (ACE), the Centre for Exploratory Altruism Research (CEARCH), and GiveWell do.

1. ^

āComputer riskā, āLLM riskā, ācomputer risk interventionsā and āLLM risk interventionsā.

• 23 Jul 2024 14:33 UTC
2 points
0āā¶ā0
in reply to: Will Howard🔹's comment

Here is an example with text in a table aligned to the left (select all text → cell properties → table cell text alignment).

• Thanks for the post! I wonder whether it would also be good to have public versions of the applications (sensitive information could be redacted), as Manifund does, which would be even less costly than having external reviewers.