Thank you for your recent post and your ALLFED feedback.
I made my request for feedback publicly, so I am also responding publicly, as such openness can only benefit the investigation and advancement of the causes we are passionate about.
We appreciate your view that ALLFED’s work is of “decent quality, helpful to many and made with well-aligned intention”.
We also appreciate many good points raised in your feedback, and would like to comment on them as follows.
As I mentioned in the response linked above, I currently feel relatively hesitant about civilizational collapse scenarios and so find the general cause area of most of ALLFED’s work to be of comparatively lower importance than the other areas I tend to recommend grants in.
People’s intuitions on the long-term future impact of these types of catastrophes, and on the tractability of reducing that impact with money, vary tremendously.
One possible mechanism for extinction from nuclear winter is as follows. It is tempting to think that if there is enough stored food to keep 10% of the population alive for five years until agriculture recovers, then 10% of people will survive. However, if the food is distributed evenly, everyone will die after six months. It is not clear to me that the food would be so well protected from the masses that many people would survive. Similarly, there could be some continuous food production in these scenarios if managed sustainably, such as fish that could relocate to the tropics. However, again, if there are many desperate people, they might eat all the fish, so everyone would starve. Hunter-gatherers generally don’t have stored food and could starve as well. Even if agrarian societies managed to have some people survive on stored food, if there were a collapse of anthropological civilization, people might not be able to figure out how to become hunter-gatherers again. Even if there is no extinction, it is not clear we would recover civilization, because we have had a stable climate for the last 10,000 years and we would not have easily recoverable fossil fuels for industrial civilization. And even if we did not lose civilization, worse values arising from the nastiness of the die-off could result in totalitarianism or end up locked into AGI (though you point out that it is possible we could be more careful with dangerous technologies the second time around).
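The even-distribution point is simple arithmetic; here is a sketch in Python using the 10% / five-year figures from above:

```python
# If stored food could sustain 10% of the population for 5 years,
# the total stock equals 0.10 * 5 = 0.5 person-years of food per capita.
survivor_fraction = 0.10
years_of_food_for_survivors = 5

person_years_per_capita = survivor_fraction * years_of_food_for_survivors  # 0.5

# Shared evenly across the whole population, that same stock lasts:
years_if_shared_evenly = person_years_per_capita / 1.0
print(years_if_shared_evenly)  # 0.5 years, i.e. six months
```

So the same stockpile that could carry a 10% remnant through a five-year recovery is exhausted in six months if no one is excluded from it.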
As for the tractability, people have pointed out that many of the interventions we talk about have already been done at small scale. So it is possible that they would be adopted without further ALLFED funding (and we have a parameter for this in the Guesstimate models). However, there is some research that takes calendar time and cannot be parallelized (such as animal research). Furthermore, if there is panic before people find out that we could actually feed everyone, then the chaos that results probably means the interventions won’t get adopted.
Given the large variation in intuitions, we have tried to run surveys to get a variety of opinions. For the agricultural catastrophes (nuclear winter, abrupt climate change, etc.), we got eight GCR researchers’ opinions. The results varied by nearly four orders of magnitude. The most pessimistic found marginal funding of ALLFED now to be the same order of magnitude of cost-effectiveness as AI at the margin; the most optimistic found it four orders of magnitude more cost-effective than AI (considering future work that will likely be done). I know you in particular are short on time, but I would encourage anyone interested in this issue to put their own values into the blank model (to avoid anchoring) and see what they produce for agricultural catastrophes. Of course, even if it does not turn out to be more cost-effective than AI, it could still be competitive with engineered pandemics.
This particular EA Long Term Future Fund application focused on a different class of catastrophes: those that could disrupt electricity/industry (including solar storms, high-altitude electromagnetic pulses, and narrow-AI computer viruses). In this case, a poll was taken at EAG San Francisco 2018, so the data are less detailed. There appear to be fewer orders of magnitude of variation in this case. Since the mean cost-effectiveness ratio to AI is similar, this likely means even the most pessimistic respondent would judge preparations for losing electricity/industry at the margin to be more cost-effective than AI. Again, here is a blank model for this cause area.
Most of ALLFED’s work does not seem to help me resolve the confusions I listed in the response linked above, or provide much additional evidence for any of my cruxes, but instead seems to assume that the intersection of civilizational collapse and food shortages is the key path to optimize for. At this point, I would be much more excited about work that tries to analyze civilizational collapse much more broadly, instead of assuming such a specific path.
As for assuming a specific path to optimize for improving the long-term future: in the book Feeding Everyone No Matter What, we did go through a number of problems associated with nuclear winter, and food shortage was clearly the most important (as has been recognized by others, including Alan Robock). However, for catastrophes that disable electricity/industry, it is true that issues such as water, shelter, communications, and transportation are very important, which is why we have developed interventions for those as well.
I have some hesitations about the structure of ALLFED as an organization. I’ve had relatively bad experiences interacting with some parts of your team and heard similar concerns from others. The team also appears to be partially remote, which I think is a major cost for research teams, and to have its primary location in Alaska, where I expect it will be hard for you to attract talent and also engage with other researchers on this topic (some of these models are based on conversations I’ve had with Finan, who used to work at ALLFED but left because of it being located in Alaska).
This has been an interesting one for both myself and the team to consider.
One of the unique features of ALLFED is our structure, which reflects our work on *both* research and preparedness. As such, we have opted for a small, flexible, multi-location organization, which allows us to get to places and collaborate globally.
While I am myself indeed based in Alaska, we also have a strong UK team based in London and Oxford, busy developing collaborations with academia (e.g. UCL), finance and industry and attending European events (just back from Geneva and the United Nations Global Platform for DRR and heading to Combined Dealing with Disasters International Conference next month). As for attracting talent, we have built alliances with researchers at Michigan Technological University, Penn State, Tennessee State University, and the International Food Policy Research Institute who are ready to do ALLFED projects once we get funding. This is why our room for more funding in the next 12 months is more than $1 million. We have also co-authored papers with people at CSER, GCRI, and Rutgers University.
Overall, we feel the geographical spread has been beneficial to us, has certainly contributed to greater diversity within the team, and has allowed access to a greater body of knowledge, contacts, and connections. As an aside, we feel that all individuals with a passion for GCR work and relevant talents should be able to contribute to it, regardless of their location, family/personal demands, or physical abilities. Facilitating and enabling this via remote working has seemed an obvious benefit to the organization and the right thing to do.
We have read this EA forum post on local/remote teams with great interest and find its conclusions and recommendations consistent with our experience. Working across continents has certainly contributed to the development of robust internal organizational structures, clarity in goals, objectives, accountability, communications and such.
As for my personal experience of being based in Alaska, I don’t feel that my interaction with the team here has been significantly different than with remote team members (referring back to this: the people in Alaska are not in the same hallway, though we do have in-person meetings). So basically we can recruit students for projects that are routed through the University, but then other researchers can be remote.
The exceptions of course are if an experiment requires significant facilities and is not done by a student (as was the case with Finan) or if one’s personal preferences are for more social interactions.
We are of course concerned and have noted your comment on “relatively bad experiences interacting with some parts of (our) team”. We would very much like to learn more about this (if you don’t mind, perhaps in private this time, to ensure people’s privacy/confidentiality). We cannot help but wonder whether our commitment to diversity, including neurodiversity, may have had some unintended consequences. We do have individuals on the team whose communication needs and style may at times present something of a challenge, particularly to those unaware of such considerations. Thank you for alerting us to the possible impacts of this; we will certainly look at this, and any other “team interactions” matters, and see how they can be managed better. We are hopeful that, overall, there have been many more positive interactions than problematic ones, and would like to take this opportunity to thank you (and anybody else who may have experienced issues around this) for your patience and understanding.
Going forward, this applies as much to this particular response as to any future interaction with the ALLFED team, for anyone reading this: if any such interaction does not quite work out, please let me know (so we may either make good or provide context).
All in all, we are grateful for your feedback and pleased with our decision to engage in this publicly. Hopefully this will be of use not only to ALLFED as an organization but to the broader EA community.
It is very helpful to see your reasoning and cruxes. I replied to the ALLFED-related issues above, but I thought I would reply to the pandemic issue here. Here is one mechanism that could result in greater than 90% mortality from a pandemic: multiple diseases at the same time, i.e. a multipandemic.
Very interesting! Is the dung beetle fecundity two per female? How can the population ever grow?
Great to hear the Facebook group is inclusive!
That’s right—you can see more discussion here. That’s why nuclear war and extreme climate change can be considered existential risks.
Thanks for the list. I was excited to see engineering as a category, but then I found out it actually means software engineering. It would be very helpful to other types of engineers if EAs would specify software engineering if that’s what they mean. There are opportunities for the other types of engineering within EA, including plant-based and clean meat, climate change and food for catastrophes, with a number of effective theses here.
Very interesting! This highlights a number of issues. They mention 2% of GDP is charity. But I believe not all GDP shows up as gross household income. And typically EAs use pretax income (adjusted gross income in the United States), which is lower than gross household income. Some surveys use “disposable income”, which is probably even lower than pretax income. So there could easily be a factor of two difference here, and indeed this study found 3.6% average giving (though it was only of people with household income greater than $80,000 per year). There is also the question of whether mean % donations should be person-weighted or donation-weighted (the latter would agree with the GDP number better). But in other studies, I think I’ve seen that even in low income groups, average giving is still over 1%. Some have even claimed that higher income people give a lower percent of their money, but I am skeptical of this. So I’m not sure what’s going on here.
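The denominator issue above can be made concrete. This sketch uses purely hypothetical income figures (all numbers are assumptions for illustration) to show how the same donation reads as a noticeably different percentage depending on which income base the survey uses:

```python
# Hypothetical household: a fixed $2,000 donation measured against
# three different income denominators that surveys commonly use.
donation = 2_000
denominators = [
    ("gross household income", 100_000),
    ("pretax income (AGI)", 80_000),
    ("disposable income", 60_000),
]

for name, income in denominators:
    print(f"{name}: {donation / income:.1%}")
# The same giving behavior shows up as roughly 2.0%, 2.5%, and 3.3%,
# so denominator choice alone can move the headline figure substantially.
```

With a bigger gap between GDP per household and disposable income, the factor-of-two discrepancy mentioned above is easy to produce without anyone’s giving behavior changing at all.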
Thanks for adding ALLFED! Nitpick, but if you want to sound impressive, you can say that a steep learning curve actually means much learning over a short period of time.
I find it unlikely that we would export wild-animal suffering beyond our solar system. It takes a lot of time to move to different solar systems, and I don’t think future civilizations will require a lot of wilderness: it’s a very inefficient use of resources. So I believe the amount of suffering is relatively small from that source. However, I think some competitive dynamics between digital beings could create astronomical amounts of suffering, and this could come about if we focus only on reducing extinction risk.
Agreed, but the other possibility is that there will be simulations of wild animals in the future. So I think spreading the meme that wild animals can suffer to the AI community could be valuable.
Thanks for this great list! It seems like the 80,000 Hours annual review and the question of how big a tent EA should be deserve separate posts, because I think they would spark significant discussion. However, it seems like the people who contributed to/discovered these should get the karma.
Welcome to the EA forum. I’m guessing you are confused because you asked an innocuous question and got a bunch of downvotes. I did not downvote, but I can say that my initial reaction is that in the US, one typically needs to pay to recycle electronics, and I’ve always thought there would be better uses for that money. But donating electronics to a charity that could perhaps distribute them to people in less-developed countries seems like it could be positive.
The safety bicycle (two gears and a chain) arrived only in 1885, long after trains. But the roller chain had been sketched by da Vinci hundreds of years earlier and was not adopted.
I am not an artist, but it seems like visual art could illustrate scope insensitivity and neglectedness. For instance, represent a relatively small number of current lives and the huge amount of money going towards them, and then a huge number of future lives and the very small amount of money going towards them. Similarly with pets versus livestock (like ACE’s graphs posted recently on the forum). Poverty would be a little more difficult, but maybe one could use the number of people in developed countries making under $10 a day and the amount of money that flows towards them, versus the number of people in less-developed countries making under $10 a day and the amount of money that flows towards them.
We might point out that given the reality of climate change, the choice is suicidal—it’s not possible for everyone to live like Americans.
This is not only possible with future technology, but it is feasible with present technology without taking more land from nature. Renewable energy/nuclear, agricultural productivity already realized in Europe, growing seaweed (for food, feed, and carbon sequestration), not building buildings out of wood, recycling, etc.
Thanks! I was thinking environmentalism in the 1960s might have grown 100% per year from very niche to broad support. Of course the bar for considering oneself as an environmentalist is much lower than EA, basically consisting of recycling and saying one supports clean air and water.
I argued in my 80,000 Hours podcast interview that there might be something to a separate component of urgency. We generally define cost-effectiveness as something like the total increase in utility per dollar, not time-discounted. This can be worked out for AI and alternate foods, which we have done here. Say they were equally cost-effective, so we should be putting money into both of them. However, because there is a higher probability that agricultural catastrophes happen in the next 10 years than AI, the optimal course of action is to spend more of the optimal amount of money on alternate foods in the next 10 years than on AI. Another way of thinking about this is that the return on investment of alternate foods is significantly higher. We might even be able to monetize that return on investment by making a deal with a government, and then have more money to spend on AI. This logic applies to climate disasters that could happen soon, like coincident extreme weather causing floods or droughts on multiple continents. However, I don’t think it applies to the tail risk of climate change (greater than 5°C global warming), because that could not happen soon. Of course, one could argue that we should act now to reduce climate tail risk. However, if there are many other things we can do to increase welfare with a higher return on investment, we should do those things first. Then we will have more money to deal with the problem, such as paying for expensive air removal of CO2.
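A toy version of the urgency argument, with entirely made-up probabilities (both numbers are assumptions, not estimates from the models above): if two interventions buy the same risk reduction conditional on their catastrophe occurring, the one whose catastrophe is more likely in the next decade has the higher near-term return.

```python
# Assumed, illustrative probabilities of the relevant catastrophe
# occurring within the next 10 years.
p_agri_next_10y = 0.10  # agricultural catastrophe (assumed)
p_ai_next_10y = 0.02    # transformative AI (assumed)

# Same value per dollar conditional on the catastrophe occurring:
value_per_dollar_if_hit = 1.0

roi_agri = p_agri_next_10y * value_per_dollar_if_hit
roi_ai = p_ai_next_10y * value_per_dollar_if_hit

# Under these assumptions, near-term expected return favors
# alternate foods by the ratio of the probabilities.
print(round(roi_agri / roi_ai))  # 5
```

This is only the ten-year expected return, not total undiscounted cost-effectiveness; the point is that equal long-run cost-effectiveness is compatible with very unequal urgency.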
I hope that the slowdown is due to the high-fidelity model. But I am concerned that it might instead be that we are getting closer to saturation, following a sigmoidal curve. If you count all the media impressions for EA, I think the reach would be more like tens of millions (and many predisposed people have already sought it out online). Various people have posited 1% of developed countries’ populations becoming EA. At points I have been even more optimistic, noting that more than 10% of people take a 10% salary cut to work for nonprofits or the government. However, for most people there is a big psychological difference between taking a 10% pay cut and donating 10% (and there are other factors when comparing jobs). Furthermore, you need not just effort or sacrifice, but an actual prioritization of effectiveness. I am concerned that the coincidence of these two characteristics is relatively rare. I think we can continue to get growth by continually exposing new college students, hopefully at more colleges, and also by recruiting better in groups underrepresented in EA. But that probably won’t reproduce EA’s strong exponential growth of the past. Has anyone done comparisons with, say, environmentalism or feminism? Because it seems like for them to have achieved such high penetration, they would have had to do something like double every year for decades.
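The doubling arithmetic above compounds faster than intuition suggests. A sketch with a hypothetical starting size (the 1,000 figure is an assumption, not data about any movement):

```python
# Yearly doubling: size after n years is start * 2**n.
start = 1_000  # hypothetical initial adherents

for years in (10, 20, 30):
    print(years, start * 2 ** years)
# After 20 years of doubling, 1,000 becomes 1,048,576,000 (~1 billion),
# which is how a niche idea could reach broad-population penetration.
```

Conversely, any movement that did reach a large fraction of the population within a few decades must have sustained something close to this doubling rate for a long stretch, which is the comparison being asked about.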
Perhaps I’ve been fortunate in not having many crashes over my 19 years of using it. As for software compatibility, sometimes I have to open a dictation box (which is what I’m doing right now). As for the learning curve, if you want to do everything with voice, there is a lot to learn. But if you are just using it for sentences like I am, you only need to learn a few commands (and remember to dictate punctuation). If one is not a touch typist, I would think one could be faster with voice within a few hours; if someone is a typical touch typist, then maybe faster with voice within a few days?
Thanks for the useful post. Occupational therapy (U.S.) is what solved my wrist problem. But I still use Dragon NaturallySpeaking because it achieves over 100 words per minute, even including correction time, as long as you dictate at least a sentence at a time (assuming you don’t have an accent it does not support).