Hi, I’m Florian. I am enthusiastic about working on large-scale problems that require me to learn new skills and extend my knowledge into new fields and subtopics. My main interests are climate change, existential risks, feminism, history, hydrology and food security.
FJehn
Regarding the satellites, my understanding is that they can be disrupted in several ways:
Their signal gets garbled, but they remain fine
Their electronics get fried
The increased drag in the atmosphere leads to them being de-orbited
What ultimately happens depends a lot on the orbit and how hardened the satellite is, but I haven’t seen research that tries to assess this in detail (though I also haven’t looked very hard for this particular thing).
About the airplanes: Yeah, this might be an option, though I think the paper that mentioned this said something along the lines of “it is quite hard to predict where in the airplane’s path the radiation will increase, and they can receive the radiation quickly, which makes this hard to avoid”.
Auroras, space weather and the threat to critical infrastructure
Yeah, I share that worry. And in my experience it is really hard to get funding for nuclear work from both philanthropy and classic academic funders. My last grant proposal about nuclear was rejected with the explanation that we already know everything there is to know about nuclear winter, so there is no need to spend money on research there.
Hard to pin down exact numbers, but yeah, 10-20 % (and maybe a bit more) seems plausible to me, especially if we end up with higher temperatures. I would expect global tensions to be much higher in a high-warming world, especially between India and Pakistan.
Manipulating the global thermostat: Climate change, nuclear winter, and stratospheric aerosol injections
I meant specifically mentioning that you don’t really fund global catastrophic risk work on climate change, ecological collapse, near-Earth objects (e.g., asteroids, comets), nuclear weapons, and supervolcanic eruptions. To my knowledge such work has not been funded for several years now (please correct me if this is wrong). And as you mentioned that this status quo will continue, I don’t really see a reason to expect that the LTFF will start funding such work in the foreseeable future.
Thanks for offering to check whether there is a difference between the public grants and the application distribution. Would be curious to hear the results.
Thanks for the clarification. In that case I think it would be helpful to state on the website that the LTFF won’t be funding non-AI/biosecurity GCR work for the foreseeable future. Otherwise you will just attract applications that you would not fund anyway, which results in unnecessary effort for both applicants and reviewers.
[Question] Does the Long-Term Future Fund provide funding for anything besides AI and biosecurity?
The people’s history of collapse
Ah okay, got it. Have you considered asking about this on Metaculus? Maybe you could get a rough ballpark there. But I am not aware of anything like this in peer-reviewed research.
Hey Vasco. I haven’t seen anything like this. But are you talking about a probability estimate across all GCRs at once? My guess would be that the uncertainties would be so large that it would not really tell you anything.
Now that this paper is finally published, it feels a bit like a requiem for the field. Every non-AI GCR researcher I have talked to in the last year or so is quite concerned about the future of the field. A large chunk of all GCR funding now goes to AI, leaving existing GCR orgs without any money. For example, ALLFED is having to cut a large part of its programs (https://forum.effectivealtruism.org/posts/K7hPmcaf2xEZ6F4kR/allfed-emergency-appeal-help-us-raise-usd800-000-to-avoid-1), even though pretty much everyone seems to agree that ALLFED is doing good work and should continue to exist.
I think funders like Open Phil or the Survival and Flourishing Fund should strongly consider putting more money into non-AI GCR research again. I get that many people think that AI risk is very imminent, but I don’t think that justifies leaving the rest of GCR research to die on the vine. It would be quite a bad outcome if, in five years, AI risk has not materialized but most of the non-AI GCR orgs have ceased to exist because all of the funding dried up.
The state of global catastrophic risk research
A list of lists of large catastrophes
Thanks for the explanation.
Yeah, I tried Connected Papers as well as Research Rabbit, but somehow they never turn out to be super helpful. Do you have a specific strategy when you use them?
Could you elaborate on what you mean by 2)? What reference manager are you using?
How to write a living literature review: Finding papers, coming up with ideas, writing and publishing
What was the criticism of the university? I would have been pretty happy if my bachelor’s students had been able to cobble something like this together.
Yes, I think posting it on a preprint server would be worth your time. As long as this stays an EA Forum post or a thesis hidden in a university archive, no one can take a look at it. If you put it on a preprint server, other people can find and reference it if they find it helpful. The worst case is that nobody builds on it, but the cost of putting it on a preprint server is essentially zero, and if it stays an EA Forum post the chances that somebody uses it are much lower.
Pretty interesting stuff. If these are your “rough drafts” then your polished papers must be wild.
Have you considered putting this on a preprint server (e.g. https://eartharxiv.org/), so others can properly cite it?
Also, you might want to use another projection for your maps. I have found that Winkel Tripel works better if you want to display such global indices.
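In case it helps, here is a minimal sketch of what that reprojection could look like with GeoPandas (the file name `global_index.gpkg` and the column `index_value` are hypothetical placeholders for your data):

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical file with country polygons and a column "index_value"
# holding the global index to be mapped.
gdf = gpd.read_file("global_index.gpkg")

# Reproject from geographic coordinates to Winkel Tripel
# ("wintri" is the PROJ name for this projection), which distorts
# area and shape less at a global scale than a plain lat/lon map.
gdf_wintri = gdf.to_crs("+proj=wintri")

gdf_wintri.plot(column="index_value", legend=True)
plt.show()
```

Any CRS identifier for Winkel Tripel that pyproj understands should work the same way as the PROJ string above.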
I had a similar experience. I have recommended the podcast to dozens of people over the years, because it was one of the best at having fascinating interviews with great guests on a very wide range of topics. However, since it switched to AI as its main topic, I have recommended it to zero people, and I don’t expect this to change if the focus stays this way.