Hi, I’m Florian. I am enthusiastic about working on large-scale problems that require me to learn new skills and extend my knowledge into new fields and subtopics. My main interests are climate change, existential risks, feminism, history, hydrology, and food security.
Thanks, appreciated!
Putting your plot here in the comment, so others don’t have to go through the spreadsheet:
What did you group into scientific research?
Curious to see this, because it does not map at all onto what seems to be happening in the broader GCR space. ALLFED had to downsize massively, CSER has also gotten smaller, GCRI has gotten smaller, and FHI has ceased to exist (though not explicitly due to funding). So how does this square with the funding staying constant for non-AI things? Where is this funding going, if it clearly does not end up with the most well-known GCR orgs?
Thanks for making this! Is there a more detailed breakdown for the longtermist category? I suspect there was also a massive shift within this category in the last few years: most money went to AI, some breadcrumbs went to pandemics, and pretty much nothing went to the rest, while before that it was more balanced. Would be curious to see if this is true.
I had a similar experience. I recommended the podcast to dozens of people over the years, because it was one of the best at doing fascinating interviews with great guests on a very wide range of topics. However, since it switched to AI as the main topic, I have recommended it to zero people, and I don’t expect this to change if the focus stays this way.
With the satellites, my understanding is that they can be disrupted in several ways:
- Their signal gets garbled, but they remain fine
- Their electronics get fried
- The increased drag in the atmosphere leads to them being de-orbited
What ultimately happens depends a lot on the orbit and how hardened the satellite is, but I haven’t seen research that tries to assess this in detail (but also haven’t looked very hard for this particular thing).
About the airplanes: yeah, this might be an option, though I think the paper that mentioned this said something along the lines of “it is quite hard to predict where in an airplane’s path the radiation will increase, and planes can receive the radiation quickly, which makes this hard to avoid”.
Yeah, I share that worry. And from experience it is really hard to get funding for nuclear work in both philanthropy and classic academic funding. My last grant proposal about nuclear was rejected with the explanation that we already know everything there is to know about nuclear winter, so no need to spend money on research there.
Hard to pin down exact numbers, but yeah, 10-20% (and maybe a bit more) seems plausible to me, especially if we end up at higher temperatures. I would expect global tensions to be much higher in a high-warming world, especially between India and Pakistan.
I meant specifically mentioning that you don’t really fund global catastrophic risk work on climate change, ecological collapse, near-Earth objects (e.g., asteroids, comets), nuclear weapons, and supervolcanic eruptions. Because to my knowledge such work has not been funded for several years now (please correct me if this is wrong). And as you mentioned that the status quo will continue, I don’t really see a reason to expect that the LTFF will start funding such work in the foreseeable future.
Thanks for offering to check whether there is a difference between the public grants and the application distribution. Would be curious to hear the results.
Thanks for the clarification. In that case I think it would be helpful to state on the website that the LTFF won’t be funding non-AI/biosecurity GCR work for the foreseeable future. Otherwise you will just attract applications that you would not fund anyway, which results in unnecessary effort for both applicants and reviewers.
Ah okay, got it. Have you considered asking this on Metaculus? Maybe you could get a rough ballpark there. But I am not aware of anything like this in peer-reviewed research.
Hey Vasco. I haven’t seen anything like this. But are you talking about a probability estimate across all GCRs at once? My guess would be that the uncertainties would be so large that it would not really tell you anything.
Now that this paper is finally published, it feels a bit like a requiem for the field. Every non-AI GCR researcher I talked to in the last year or so is quite concerned about the future of the field. A large chunk of all GCR funding now goes to AI, leaving existing GCR orgs without any money. For example, ALLFED is having to cut a large part of its programs (https://forum.effectivealtruism.org/posts/K7hPmcaf2xEZ6F4kR/allfed-emergency-appeal-help-us-raise-usd800-000-to-avoid-1), even though pretty much everyone seems to agree that ALLFED is doing good work and should continue to exist.
I think funders like Open Phil or the Survival and Flourishing Fund should strongly consider putting more money into non-AI GCR research again. I get that many people think AI risk is very imminent, but I don’t think this justifies leaving the rest of GCR research to die on the vine. It would be quite a bad outcome if in five years AI risk has not materialized, but most of the non-AI GCR orgs have ceased to exist because all of the funding dried up.
Thanks for the explanation.
Yeah, I tried Connected Papers as well as Research Rabbit, but somehow they never turn out to be super helpful. Do you have a specific strategy when you use them?
Could you elaborate on what you mean by 2)? What reference manager are you using?
What was the university’s criticism? I would have been pretty happy if my bachelor’s students had been able to cobble something like this together.
Yes, I think posting it on a preprint server would be worth your time. As long as this stays an EA Forum post or a thesis hidden in a university archive, no one can take a look at it. If you put it on a preprint server, other people can find and reference it if they find it helpful. The worst case is that nobody builds on it, but the cost of putting it on a preprint server is essentially zero, and if it stays an EA Forum post the chances that somebody uses it are much lower.
Pretty interesting stuff. If these are your “rough drafts” then your polished papers must be wild.
Have you considered putting this on a preprint server (e.g. https://eartharxiv.org/), so others can properly cite it?
Also, you might want to use a different projection for your maps. I found that Winkel Tripel works better if you want to display such global indices.
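In case it helps, here is a minimal sketch of how the reprojection could look with GeoPandas, using PROJ’s “+proj=wintri” string for Winkel Tripel. The input file path and the “index_value” column are placeholders for your own data, not something from your post:

```python
# Minimal sketch: plot a global index in the Winkel Tripel projection.
# The shapefile path and "index_value" column are hypothetical placeholders.
import geopandas as gpd
import matplotlib.pyplot as plt

world = gpd.read_file("world_countries.shp")  # your own country/grid data
world_wintri = world.to_crs("+proj=wintri")   # reproject via PROJ's Winkel Tripel

fig, ax = plt.subplots(figsize=(10, 6))
world_wintri.plot(column="index_value", ax=ax, legend=True)
ax.set_axis_off()
fig.savefig("global_index_wintri.png", dpi=300, bbox_inches="tight")
```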
Hadn’t thought of that, but yeah that does indeed sound like another large problem. I hope that everything turns out well for your project in the end!
Thank you!
For Figure 3 you have to keep in mind that this is biomass. It does not necessarily mean it could be eaten, as it also includes things like crop residues, which probably make up a good chunk of that arrow.
The paper (https://www.sciencedirect.com/science/article/pii/S0308521X16302384?via%3Dihub) also includes some other interesting plots, but unfortunately not the ones you would like to see.
Given the results from Wand and Hoyer, I would expect it to just take time. It seems to be a pretty consistent pattern that many civilizations increase in complexity over time once they have adopted agriculture. Their scale-up takes around 2,500 years and then plateaus. Also, many of those increases in complexity happened completely independently, e.g., in China, the Inca Empire, and Egypt.
Tangentially related: Effektiv Spenden created a donation fund for interventions that strengthen democracy. However, so far it only focuses on Germany.
Good to know. Thanks.