Brazilian legal philosopher, postdoc in intergenerational justice, financial supervisor, and GWWC Pledger. Bachelor of Laws, Master and Doctor of Philosophy from the Federal University of Rio Grande do Sul (UFRGS), with published articles and translations in Political Philosophy, Applied Ethics, and Philosophy of Economics – with a recent focus on climate risks, environmental and social responsibility, and intergenerational justice. Post-Doctoral Researcher at the Institute of Philosophy, Faculty of Social and Human Sciences, Universidade Nova de Lisboa, integrating the Ethics and Political Philosophy Laboratory (EPLAB) and the project Present Democracy for Future Generations. Also a member of the Graduate Committee and Special Studies Analyst in the supervision of non-banking institutions at the Central Bank of Brazil (BCB). Member of the Inclusive and Sustainable Solutions association (SIS) and of the Effective Altruism community in Brazil (AE Brasil). https://philpeople.org/profiles/ramiro-avila-peres
Ramiro
Time to cancel my Asterisk subscription?
So Asterisk dedicates a whole self-aggrandizing issue to California, leaves EV for Obelus (what is Obelus?), starts charging readers, and, worst of all, celebrates low prices for eggs and milk?
Does anyone else consider the case of Verein KlimaSeniorinnen Schweiz and Others v. Switzerland (application no. 53600⁄20) of the European Court of Human Rights possibly useful for GCR litigation?
I never said Stoics reflected on GCR
Elon Musk? So last year… 2024 is time for Trump scandals.
Let’s buy some Truth shares and produce new scandals!
you can totally have scandals involving dead or imaginary people. So, definitely no.
I’m not sure where the best place to share this is, but I just received a message from GD that made me think of Wenar’s piece: John Cena warns us against giving cash with conditions | GiveDirectly (by Tyler Hall)
Ricky Stanicky is a comedy about three buddies who cover for their immature behavior by inventing a fictitious friend ‘Ricky’ as an alibi. [...]When their families get suspicious, they hire a no-name actor (played by John Cena) to bring ‘Ricky’ to life, but an incredulous in-law grills Ricky about a specific Kenyan cash transfer charity he’d supposedly worked for. Luckily, actor Ricky did his homework on the evidence.
So I just replied to GD, asking:
Did John Cena authorize you to say things like “Be like John Cena and give directly”? Or is this legally irrelevant? Did you notice that you’re using a fraudster as an example?
Even if one accepts that what Cena’s character (Stanicky-Rod) says is true, he’s misleading other people; so the second thing that should come to mind when one reads your message is “so what makes me confident that GD is not lying to me, too?” At least add some lines to assure your donors (maybe you see them more as customers?) that they are not being similarly fooled.
I’m possibly biased, but I do see this as an instance of an EA-adjacent collaborator failing to put himself in the donors’ shoes. But I guess it might be an effective ad, so it’s all for the best?
As a civil servant from a developing country, I can say that those estimates mean almost nothing. I don’t think they are well invested, and they are tiny in comparison to adaptation gaps.
I think there’s a huge problem of prioritization when it comes to adaptation investment—because developing countries seldom link infrastructure resilience to adaptation policies.
I think there’s a relevant distinction to be made between field building (i.e., developing a new area of expertise to provide advice to decision-makers—think about the history of gerontology) and movement building (which makes me think of advocacy groups, free masons, etc.). Of course, many things lie in-between, such as neoliberals & Mont Pelerin Society.
FWD: Invitation to the Future Generations Initiative Launch Event
We are delighted to extend an official invitation to you for the official launch of the Future Generations Initiative, which will take place on February 21, from 16.00 to 18.00 at Atelier 29 - Rue Jacques de Lalaing 29, 1000 Brussels, and will also be streamed online.
To confirm your attendance, kindly fill out this short registration form by Monday, February 19, at 18.00 CET.
There is an urgent need for the EU to embed the rights of Future Generations in its decision-making processes. However, a model for such representation is currently lacking.
A diverse group of NGOs is working together to convince decision makers that the time to act for Future Generations is now. On February 21, we will launch our coalition to promote this important issue as we approach the EU elections and the next political cycle begins.
You can find the event agenda by following this link.
By completing the registration form, you have the option to attend either in person or virtually.
The event will feature the presentation of the Future Generations proposal and policy demands, and a reception will follow.
Please do not hesitate to reach out to marco@thegoodlobby.eu should you have any questions.
Thinking about this one year later, I realize that Global Catastrophic events are much like Carnival in Brazil: unlivable climatic conditions, public services are shut down, traffic becomes impossible, crowds of crazy people roam randomly through the streets… but without Samba and beaches, of course (or, in the case of Curitiba, without zombies selling you beer)
But your song was actually written as an apocalyptic message: https://pt.wikipedia.org/wiki/Eva_(can%C3%A7%C3%A3o_de_Umberto_Tozzi)#:~:text=A%20letra%20da%20can%C3%A7%C3%A3o%20mistura%20fic%C3%A7%C3%A3o%20cient%C3%ADfica%20p%C3%B3s%2Dapocal%C3%ADptica%20com%20inspira%C3%A7%C3%A3o%20na%20B%C3%ADblia%20e%20%C3%A9%20narrada%20por%20um%20homem
For more, see this brilliant podcast: https://globoplay.globo.com/podcasts/episode/choque-de-cultura-ambiente-de-musica/7b0c4362-3a28-4ed4-be45-e6786aaba9f9/
How consistent are “global risk reports”?
We know that the track record of pundits is terrible, but many international consultancy firms have been publishing annual “global risks reports” like the WEF’s, where they list the main global risks (e.g. top 10) for a certain period (e.g., 2y). Well, I was wondering if someone has measured their consistency; I mean, I suppose that if you publish in 2018 a list of the top 10 risks for 2019 & 2020, you should expect many of the same risks to show up in your 2019 report (i.e., if you are a reliable predictor, risks in report y should appear in report y+1). Hasn’t anyone checked this yet?
If not, I’ll file this under “a pet project I’ll probably not have time to take in the foreseeable future”
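The check proposed above is easy to sketch: if a report’s top risks for a two-year horizon are reliable, most of them should reappear in the following year’s list, so year-over-year set overlap (e.g., Jaccard similarity) is a natural first metric. The risk lists below are illustrative placeholders, not actual WEF rankings.

```python
# Sketch of the consistency check: compare consecutive years' top-risk lists.
# A reliable predictor should show high overlap between report y and report y+1.

def jaccard(a, b):
    """Jaccard similarity between two sets of named risks (1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical top-3 lists, standing in for real top-10 report data.
reports = {
    2018: ["extreme weather", "cyberattacks", "natural disasters"],
    2019: ["extreme weather", "climate policy failure", "natural disasters"],
    2020: ["climate policy failure", "extreme weather", "biodiversity loss"],
}

for year in sorted(reports)[:-1]:
    overlap = jaccard(reports[year], reports[year + 1])
    print(f"{year} -> {year + 1}: overlap = {overlap:.2f}")
```

With real data, one would also want to track rank correlation (not just membership), since a risk sliding from #1 to #10 is itself a large revision.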
Let me briefly try to reply or clarify this:
I think there is a massive difference between one’s best guess for the annual extinction risk[1] being 1 % or 10^-10 (in policy and elsewhere). I guess you were not being literal? In terms of risk of personal death, that would be the difference between a non-Sherpa first-timer climbing Mount Everest[2] (risky), and driving for 1 s[3] (not risky).
I did say that I’m not very concerned with the absolute values of precise point-estimates, and more interested in proportional changes and in relative probabilities; allow me to explain:
First, as a rule of thumb, ceteris paribus, a decrease in the average x-risk implies an increase in the expected duration of human survival—thus yielding a proportionally higher expected value for reducing x-risk. I think this can be inferred from Thorstad’s toy model in Existential risk pessimism and the time of perils. So, if something reduces x-risk by 100x, I’m assuming it doesn’t make much difference, from my POV, whether the prior x-risk is 1% or 10^-10—because I’m assuming that the EV will stay the same. This is not always true; I should have clarified this.
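The rule of thumb above can be made concrete with a minimal constant-hazard sketch (my reading of the toy-model intuition, not Thorstad’s exact formulation): with constant annual extinction risk r, expected survival time is 1/r, so a 100x cut in r multiplies expected duration by ~100x whatever the baseline is.

```python
# Constant-hazard sketch: expected survival under annual extinction risk r.
# The geometric distribution gives E[years until extinction] = 1/r.

def expected_survival_years(r):
    """Expected duration (years) under a constant annual extinction risk r."""
    return 1.0 / r

for baseline in (1e-2, 1e-10):
    reduced = baseline / 100
    gain = expected_survival_years(reduced) / expected_survival_years(baseline)
    print(f"baseline {baseline:.0e}: 100x risk cut -> {gain:.0f}x expected duration")
```

The proportional gain is the same at both baselines, which is why, under these assumptions, the point-estimate level matters less than the ratio by which an intervention changes it.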
Second, it’s not that I don’t see any difference between “1%” and “10^-10”; I just don’t take sentences of the type “the probability of p is 10^-14” at face value. For me, the reference for such measures might be quite ambiguous without additional information—in the excerpt I quoted above, you do provide that, when you say this difference would correspond to the distance between the risk of death from climbing Everest vs. driving for 1 s – which, btw, are extrapolated from frequencies (according to the footnotes you provided).
Now, it looks like you say that, given your best estimate, the probability of extinction due to war is really approximately like picking a certain number from a lottery with 10^14 possibilities, or the probability of tossing a fair coin 46-47 times and getting only heads; it’s just that, because it’s not resilient, there are many things that could make you significantly update your model (unlike the case of the lottery and the fair coin). I do have something like a philosophical problem with that, which is unimportant; but I think it might result in a practical problem, which might be important. So...
It reminds me of a paper by the epistemologist Duncan Pritchard, where he supposes that a bomb will explode if (i) in a lottery, a specific number out of 14 million is drawn, or if (ii) a conjunction of bizarre events (e.g., the spontaneous pronouncement of a certain Polish sentence during the Queen’s next speech, the victory of an underdog at the Grand National...) occurs, with an assigned probability of 1 in 14 million. Pritchard concludes that, though both conditions are equiprobable, we consider the latter to be a lesser risk because it is “modally farther away”, in a “more distant world”; I think that’s a terrible solution: people usually prefer to toss a fair coin rather than a coin they know is biased (but whose precise bias they ignore), even though both scenarios have the same “modal distance”. Instead, the problem is, I think, that reducing our assessment to a point-estimate might fail to convey our uncertainty regarding the differences in both information sets – and one of the goals of subjective probabilities is actually to provide a measurement of uncertainty (and the expectation of surprise). That’s why, when I’m talking about very different things, I prefer statements like “both probability distributions have the same mean” to claims such as “both events have the same probability”.
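The fair-coin vs. unknown-bias contrast can be sketched numerically: both setups have P(heads) = 0.5 on average, but in the second case the underlying chance is itself uncertain. Here the unknown bias is modeled, purely illustratively, as a Beta(0.5, 0.5) distribution, which also has mean 0.5 but wide spread.

```python
# Two "events with the same probability" that come from very different
# probability distributions over the underlying chance of heads.

from statistics import mean, pstdev
import random

random.seed(0)

# Fair coin: the chance of heads is known to be exactly 0.5.
fair = [0.5] * 100_000

# Coin of unknown bias: the chance itself is drawn from Beta(0.5, 0.5),
# an (illustrative) prior with mean 0.5 but large uncertainty.
unknown = [random.betavariate(0.5, 0.5) for _ in range(100_000)]

print(f"fair coin:    mean={mean(fair):.3f}, sd={pstdev(fair):.3f}")
print(f"unknown bias: mean={mean(unknown):.3f}, sd={pstdev(unknown):.3f}")
```

Both means are 0.5, yet the standard deviations differ sharply — exactly the information a bare point-estimate throws away.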
Finally, I admit that the financial crisis of 2008 might have made me a bit too skeptical of sophisticated models yielding precise estimates with astronomically tiny odds, when applied to events that require no farfetched assumptions—particularly if minor correlations are neglected, and if underestimating the probability of a hazard might make people more lenient about it (and so unnecessarily make it more likely). I’m not sure how epistemically sound my behavior is; and I want to emphasize that this skepticism is not quite applicable to your analysis—as you make clear that your probabilities are not resilient, and point out the main caveats involved (particularly that, e.g., a lot depends on what type of distribution is a better fit for predicting war casualties, or on what role tech plays).
Something that surprised me a bit, but that is unlikely to affect your analysis:
I used Correlates of War’s data on annual war deaths of combatants due to fighting, disease, and starvation. The dataset goes from 1816 to 2014, and excludes wars which caused less than 1 k deaths of combatants in a year.
Actually, I’m not sure if this dataset is taking into account average estimates of excess deaths in the Congo Wars (1996-2003, 1.5-5.4 million) - and I’d like to check how it takes into account Latin American wars of the 19th century.
Thanks for the post. I really appreciate this type of modeling exercise.
I’ve been thinking about this for a while, and there are some reflections it might be proper to share here. In summary, I’m afraid a lot of effort in x-risks might be misplaced. Let me share some tentative thoughts on this:
a) TBH, I’m not very concerned with precise values of point-estimates for the probability of human extinction. Because of anthropic bias, the fact that this is necessarily a one-time event, the incredible values involved, doubts about how to extrapolate from past events, etc., there are so many degrees of freedom that I don’t expect the uncertainties in question to be properly expressed. Thus, whether the overall “true” x-risk is 1% or 0.00000001% doesn’t make a lot of difference to me—at least in terms of policy recommendation.
I’m rather more concerned with odds ratios. If one says that every x-risk estimate is off by n orders of magnitude, I have nothing to reply; instead, I’m interested in knowing if, e.g., one specific type of risk is off, or if it makes human extinction 100 times more likely than the “background rate of extinction” (I hate this expression, because it suggests we are talking about frequencies).
b) So I have been wondering if, instead of trying to compute a causal chain leading from now to extinction, it’d be more useful to do backward reasoning instead: suppose that humanity is extinct (or reduced to a locked-in state) by 3000 CE (or any other period you choose); how likely is it that factor x figures in a causal chain leading to that?
When I try to consider this, I think that a messy, unlucky narrative where many catastrophes concur is at least on a par with a “paperclip-max” scenario. Thus, even though WW3 would not wipe us out, it would make it way more likely that something else would destroy us afterwards. I’ll someday try to properly model this.
Ofc, I admit that this type of reasoning “makes” x-risks less comparable with near-termist interventions—but I’m afraid that’s just the way it is.
c) I suspect that some confusions might be due to Parfit’s thought-experiment: because extinction would be much worse than an event that killed 99% of humanity, people often think about events that could wipe us out once and for all. But, in the real world, an event that killed 99% of humanity at once is way more likely than extinction at once, and the former would probably increase extinction risk by many orders of magnitude (especially if most survivors were confined to a state where they would be fragile against local catastrophes). The last human will possibly die of something quite ordinary.
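The two-stage point above can be put as a toy calculation (all numbers hypothetical, chosen only to illustrate the structure): even if outright one-blow extinction is astronomically unlikely, a 99%-kill catastrophe that leaves survivors fragile can dominate total extinction risk.

```python
# Toy two-stage extinction model with purely illustrative probabilities.

p_direct_extinction = 1e-7      # one event wipes out everyone at once
p_catastrophe = 1e-3            # an event kills ~99% of humanity
p_extinct_given_fragile = 0.05  # weakened survivors later succumb to something ordinary

# Route 2: catastrophe first, then extinction from the fragile state.
p_two_stage = p_catastrophe * p_extinct_given_fragile

print(f"direct route:    {p_direct_extinction:.1e}")
print(f"two-stage route: {p_two_stage:.1e}")
print(f"ratio: {p_two_stage / p_direct_extinction:.0f}x")
```

Under these made-up numbers the indirect route is hundreds of times more likely than the direct one, which is why modeling only “wipe-out-at-once” events can badly understate total risk.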
d) There’s an interesting philosophical discussion to be had about what “the correct estimate of the probability of human extinction” even means. It’s certainly not an objective probability; so the grounds for saying that such an estimate is better than another might be something like its converging towards what an ideal prediction market or logical inductor would output. But then, I am quite puzzled about how such a mechanism could work for x-risks (how would one define prices? Well, one could perhaps value lives with the statistical value of life, like Martin & Pindyck).
Thanks for this report. It’ll be quite useful.
I’d like to share some critical remarks I had previously sent RCG by e-mail:
1. Definition of “RCG”
<GCRs are defined as those with the potential to inflict severe harm on human well-being at a global scale.> (p. 2; cf. p. 6)
This definition might be too wide – it could include the global financial crisis of 2008, for instance. It is constrained, though, by the subsequent sentence: <While various risks meeting this definition have been identified, the present work focuses on the risks associated with artificial intelligence, biological risks, and abrupt sunlight reduction scenarios.>
However, much of the subsequent material is based on scientific diplomacy, preparedness for local disasters, and insurance not directly related to these types of events. But then, it’s not clear why other risks are not considered, such as the threat of conflict, extreme global warming, or other risks with cascading effects. They provide historical examples of catastrophes; an abrupt sunlight reduction scenario like the Tambora eruption (1815-16) caused “the year without a summer” but didn’t kill more than 250k people, whereas the ENSO event of 1876-79 killed around 30-50 million people (see Our World in Data; https://doi.org/10.1175/JCLI-D-18-0159.1).
Also, I don’t get what you mean by “seguros por riesgos catastróficos” (p. 8); if you mean insurance against local disasters, sure, people should buy it more often, and there’s probably a market failure… On the other hand, there is also significant moral hazard here: people will often fail to avoid risky regions because they are insured. However, if you mean RCG insurance… I really don’t know how this could work, as no current insurance system could be expected to survive such a loss in global output – but it would be interesting to explore some possible arrangements in depth[1].
2. Things I missed most:
a) More emphasis on geographical and economic aspects of Latin America
According to Wikipedia, Latin America has 656 million people, 20 million km², a combined nominal GDP of US$5.188 trillion, and a GDP (PPP) of US$10.285 trillion; but more than half of that is in Mexico and Brazil, which account for 350 million people, 10.5 million km², and a combined GDP of approx. US$4 trillion. And yet, they barely show up in the assessment; Brazilian policies are totally absent from appendix II.
Also, I think the report would have greatly benefitted from an assessment of the state capacity and fiscal space in Latin American countries (perhaps you considered it unnecessary, as it is taken into account by the INFORM index and by Dahl’s GCR Index?)
b) Historical examples of relevant disasters:
Such as the Grande Seca (part of the ENSO event of 1876-79), Haiti’s earthquake and cholera epidemics, Andean seismic and volcanic events, etc.
3. Outdated reference?
“It is estimated that disaster damage in Latin America and the Caribbean has amounted to some US$20 billion annually over a decade, with more than 45,000 deaths and 40 million people affected (Kiepi e Tayson, 2002).”
Could we find a more up-to-date source? This one is from more than twenty years ago, when the region’s GDP and population were considerably smaller. By way of comparison, the National Confederation of Municipalities in Brazil estimates that natural disasters have caused losses of R$400 billion (US$80 billion) in the last decade in Brazil alone (more conservative estimates put that value at around half of this). If that sounds like a lot, consider that Newman and Noy (2023) estimate that global warming alone causes US$143 billion in damage per year worldwide (of which 63% refers to the value of deaths), and that Latin America accounts for 8.4% of the world’s population and 7.5% of its GDP – from which we could expect at least US$7 billion to US$13 billion of annual damage in the region just because of global warming.
[1] What one usually wants from an insurance scheme is: a) to pool risk between different agents, b) to internalize ex ante the costs of risks, and c) to hedge or protect against uncertain events. There are some proposed mechanisms along these lines: i) a World Climate Bank (Broome & Foley, 2016); ii) the Glasgow Loss and Damage Mechanism; iii) Cotton-Barratt’s proposal of insurance for dual-use pathogen research; etc.
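The regional-damage figure above is pure scaling arithmetic, so it can be checked directly from the numbers cited (the Newman and Noy global estimate and Latin America’s GDP and population shares):

```python
# Back-of-the-envelope check: scale the cited global warming damage estimate
# (US$143 billion/year, Newman & Noy 2023) by Latin America's share of
# world GDP (7.5%) and population (8.4%).

global_damage_bn = 143

low = global_damage_bn * 0.075   # GDP-share scaling
high = global_damage_bn * 0.084  # population-share scaling

print(f"implied regional damage: US${low:.1f}-{high:.1f} billion/year")
```

The scaling lands at roughly US$11-12 billion per year, consistent with the “at least US$7 billion to US$13 billion” range stated above.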
Opportunity for Austrians
Article by Seána Glennon: “In the coming week, thousands of households across Austria will receive an invitation to participate in a citizens’ assembly with a unique goal: to determine how to spend the €25 million fortune of a 31-year-old heiress, Marlene Engelhorn, who believes that the system that allowed her to inherit such a vast sum of money (tax free) is deeply flawed.”
T20 Brasil | T20 BRASIL CALL FOR POLICY BRIEF ABSTRACTS: LET’S RETHINK THE WORLD
The T20 Brasil process will put forward policy recommendations to G20 officials involved in the Sherpa and Finance tracks in the form of a final communiqué and task forces recommendations.
To inform these documents, we are calling upon think tanks and research centres around the world – this invitation extends beyond G20 members – to build and/or reach out to their networks, share evidence, exchange ideas, and develop joint proposals for policy briefs. The latter should put forward clear policy proposals to support G20 in addressing global challenges.
Selection criteria
Policy briefs must be related to the 36 subtopics that have been selected based on (i) the suggestions received from more than 100 national and foreign think tanks and research centres that have already expressed their interest in engaging with the T20 Brasil process and activities and (ii) the three priorities spelt out by the G20 Brazil presidency. These subtopics are organised under the six Task Forces themes.
For me it’s hard to believe that companies will spend much more on compliance than what they are already spending on marketing and offsets to greenwash their reputations. And when we implement carbon taxes / markets, they’ll need to disclose that info anyway.
you’re already a record-breaker in my heart