Hi, I’m Florian. I am enthusiastic about working on large-scale problems that require me to learn new skills and extend my knowledge into new fields and subtopics. My main interests are climate change, existential risks, feminism, history, hydrology, and food security.
FJehn
Yeah good point. I’ll probably do it differently if I revisit this next year.
Yeah, fair enough. I personally view the Robock et al. papers as the “let’s assume that everything happens according to the absolute worst case” side of things. From this perspective they can be quite helpful for getting an understanding of what might happen. Not in the sense that it is likely, but in the sense of what is even remotely in the cards.
Just a side note: the study you mention as especially rigorous in 1) iii) (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017JD027331) was produced at Los Alamos Labs, an organization whose job it is to make sure that the US has a large and working stockpile of nuclear weapons. It is financed by the US military and therefore has a very clear incentive to talk down the dangers of nuclear winter. For this reason, several well-connected people in the nuclear space I talked to have said this study should not be trusted.
An explanation of why it makes sense to talk down the risk of nuclear winter if you want to maintain a working deterrent is described here: https://www.jhuapl.edu/sites/default/files/2023-05/NuclearWinter-Strategy-Risk-WEB.pdf
What exactly confused you about the code? It only strips down the names and counts them.
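To illustrate the kind of processing described (the actual script is not shown here, so the names and normalization rule below are hypothetical placeholders): strip author names down to a canonical form, then count how often each appears.

```python
from collections import Counter

def normalize(name: str) -> str:
    # Strip surrounding whitespace and lowercase, so spelling variants
    # of the same author collapse into one key. (Illustrative rule only.)
    return name.strip().lower()

# Placeholder author strings, not real database entries.
authors = ["Bostrom, Nick", "bostrom, nick ", "Torres, Phil"]
counts = Counter(normalize(a) for a in authors)
print(counts)  # Counter({'bostrom, nick': 2, 'torres, phil': 1})
```

The real script may use a different normalization, but the overall shape (normalize, then count) is the same.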
That the publications by someone are under counted makes sense, given how TERRA works, as likely not all publications are captured in the first place and probably not all publications were considered existential risk relevant. When I look at Bostrom’s papers I see several that I would not count as directly x-risk relevant.
Where exactly did you find the number for Torres? On their own website (https://www.xriskology.com/scholarlystuff) they list 15 papers, and the list only goes to 2020. Since then, Torres has published several more papers, so this checks out.
I personally did not exclude any papers; I simply used the existing TERRA database. Interestingly, the database only contains one paper by Whittlestone. It seems the keywords TERRA currently uses did not catch Whittlestone’s work. So, yes, this is an undercount.
Exactly, this only counts the number.
Thanks for the kind words (sometimes feels like those are hard to come by in the forum).
Good point. Changed the title accordingly.
“can’t really draw much in the way of conclusions from this data” seems like a really strong claim to me. I would surely agree that this does not tell you everything there is to know about existential risk research and it especially does not tell you anything about x-risk research outside classic academia (like much of the work by Ajeya).
But it is based on the classifications by a lot of people of what they think is part of the field of existential risk studies, and I therefore think it gives a good proxy for what people in the field consider part of their field. Also, this is not meant to be the ultimate list but, as stated at the beginning of the post, a way to give people an overview of what is going on.
Finally, I think that this surely tells you something about the participation of women in the field. 1 out of 25 is really, really unlikely to happen by chance.
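To make "really, really unlikely" concrete, here is a quick binomial calculation. The 50% base rate is an assumption for illustration (it treats each of the 25 researchers as equally likely to be a woman or a man); under it, seeing at most one woman among 25 is vanishingly improbable.

```python
from math import comb

n, p = 25, 0.5  # 25 researchers; assumed 50% base rate (illustrative)

# P(at most 1 woman) = P(0 women) + P(1 woman) under Binomial(n, p)
prob_at_most_one = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
print(prob_at_most_one)  # ≈ 7.7e-07, i.e. less than one in a million
```

Even with a much lower assumed base rate the probability stays very small, so the observed 1-out-of-25 is hard to attribute to chance alone.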
From what I have seen of TERRA, I think almost all of it is peer reviewed, but from time to time a preprint, a non-peer-reviewed book, or something similar slips in.
TERRA is based on Scopus.
I’ve read it now and it was quite interesting. Though it did not really shift my conclusions. The only update I had was that we might even know less about the long term consequences (2100+) than I thought before.
I think that tipping elements could make a significant contribution to the destabilization of global civilization, which ultimately could contribute to collapse. This would likely not happen via temperature, but via other disruptive effects like significant sea level rise or the destruction of ecosystems. However, the main effects of this are likely beyond 2100. Therefore, I am really unsure how this will ultimately play out; I think we currently know too little to make a good estimate. Hopefully, the next special report of the IPCC will be about this, which would likely make things much clearer. Therefore, I’ll probably not investigate this much further right now, as things seem too uncertain.
Hmm I feel like this is already a lot of line breaks. Most of the paragraphs are only ~ 5 sentences or less.
And at least for me bolding breaks the reading flow.
I used table S4, which includes a longer list of possible tipping points.
Just did a quick calculation. If you assume the minimal value as the trigger, you get ~0.61°C additional warming at 3°C warming.
Also, a lot more of the points are triggered at lower warming than this.
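The quick calculation above can be sketched as follows. The element names and numbers below are placeholders, not the actual values from Table S4; the point is only the shape of the calculation (take each tipping element's minimum trigger temperature, keep those triggered at or below 3 °C, and sum their estimated additional warming).

```python
# name: (minimum trigger temperature in °C, additional warming in °C)
# Placeholder values for illustration, NOT the Table S4 data.
tipping_elements = {
    "element_a": (1.5, 0.2),
    "element_b": (2.0, 0.3),
    "element_c": (4.0, 0.4),  # not triggered at 3 °C in this sketch
}

warming = 3.0  # warming level at which we evaluate the triggers
extra = sum(dw for trigger, dw in tipping_elements.values() if trigger <= warming)
print(extra)  # 0.5 with these placeholder numbers
```

With the real minimum trigger values from Table S4, this kind of sum gives the ~0.61 °C figure mentioned above.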
Yes, that is how I would interpret their Table S4, which seems like the main summary of their findings.
What was your impression of how the media represented their findings? It feels to me like the media often presents tipping points as happening instantaneously, while most of them rather play out on the time scale of centuries.
However, you could make an argument that staying much below that is also sensible, as the tipping points are not only triggered by temperature, but also by physical processes like the dilution of salt concentrations in sea water.
Also, for writing this section I used the estimated values. If you use the minimal values for triggering the tipping points the picture becomes more grim.
Thank you for the recommendation! I’ll read through it and update the permanent version on Github.
Hi Corentin. Thanks for the comments. I plan to also look more into biodiversity and societal tipping points, but I haven’t yet found the time.
Concerning the reformatting: maybe it’s just me, but I have a much harder time reading those executive-summary-style posts, and therefore I would rather leave it the way it is.
This podcast episode feels like something out of a different timeline after the rough time EA has gone through since then. I’d be very curious to hear whether opinions on the things said in the podcast are considerably different now.
Surely, they are more modern than utilitarianism: utilitarianism was developed in the 18th and 19th centuries, while all the others mentioned are from the 20th century. And it is not their “novelty” that is interesting, but that they are a direct follow-up to and criticism of ideas like utilitarianism. Also, I don’t think the post above was an endorsement of fascism, but rather a call to understand why people turned to fascism in the first place.
The main contribution of the above-mentioned fields of ideas to EA is that they highlight that reason is not as strong a tool as many EAs think it is. You can easily get yourself into a bad situation even if you follow reason all the way. Reason is not something objective; it is born from your standpoint in the world and the culture you grew up in.
And if EA (or you) has considered things like existentialism, structuralism, and post-structuralism, I’d love to see the arguments for why they are not important to EA. I’ve never seen anything in this regard.
It seems to me that we are talking about different definitions of what “political” means. I agree that in some situations it can make sense not to chip in on political discussions, so as not to get pushed to one side. I also see that there are some political issues, like animal welfare, where EA has taken a stance. However, when I say political, I mean: what are the reasons for us doing things, and how do we convince other people of them? In EA, it is often argued that something is not political because there has been an “objective” calculation of its value. However, there is almost never a justification for why it was deemed important in the first place, even though, when you want to change the world, this is the important part. Or, on a more practical level: why are QALYs seen as the best way to measure outcomes in many cases? Using this and not another measure is a choice that has to be justified.
This is just another data point that the existential risk field (like most EA adjacent communities) has a problem when it comes to gender representation. It fits really well with other evidence we have. See, for example Gideon’s comment under this post here: https://forum.effectivealtruism.org/posts/QA9qefK7CbzBfRczY/the-25-researchers-who-have-published-the-largest-number-of?commentId=vt36xGasCctMecwgi
While on the other hand there seems to be no evidence for your “men just publish more, but worse papers” hypothesis.