Collaborators Wanted: Could war disrupt EA orgs in the US or UK in the next 10 years?
The effective altruism movement needs to be disaster resistant. That requires information we can use to put probabilities on potential problems that have severe consequences. Even a small chance of a large disruption to organisations that are saving so many lives is worth some of my hours. Therefore, I’ve been gathering information on the probability of war in America and the UK to see whether that probability is small or not. I’ve been asking around and haven’t found anyone else in EA who has collected the available data on this so far.
If you or anyone you know has related information or projects, let’s not duplicate each other’s work. Let’s collaborate!
To check out the project or exchange contact info, please send me a PM.
Edit Two: I now suspect that the audience on this website expects the content to be more like a publication than a message board. I think they want posts to resemble something between a finished blog article and a study. I think I was confused because I’m used to the word “forum” describing an Internet discussion forum, which is much more casual. If I still don’t quite seem to get it, I’d appreciate an explanation. I need to find out whether anyone else has done similar projects and locate collaborators who may have information sources I’m not aware of. If you think there’s a way to present something that’s in progress for the purpose of finding existing projects and collaborators, I’d like to know how to present it. Thanks.
Edit One: I removed anything from this post that could give people the idea that I should be supplying data as it’s way too early to share my work at this stage. I’m looking for anyone who has already made progress on this or similar projects to be sure I don’t duplicate the work. I’ve been taking time away from doing more freelance work to research this because this project is not yet funded. I don’t want to waste a bunch of time duplicating work that might already be out there.
It would be helpful when evaluating this project to see some of the work you’ve already done.
Yeah, I’m potentially interested but would be curious what direction you’re thinking of going here.
I’m open to going in whatever direction gives the EA community the most insight into the truth, with whatever presentation encourages the most constructive use of that information. In case you’re interested in specifics, I am currently working on a planning document about how, specifically, to accomplish all that. I can give you access if you wish (just give me your Google Docs address via PM).
I’m open to considering directions / direction changes. What are your thoughts so far? :)
I am not sure if you are requesting to see the project, or if you are making a complaint of some sort. It’s easy enough for anyone to PM me and request to see the project. Just in case, I updated my post to explicitly invite people to PM me to see the project.
In case this wasn’t clear, the project isn’t finished yet. Before dumping a lot more hours into it, I want to see whether I’m duplicating anyone’s work.
The fact that it is not yet finished is why I did not publish anything about it so far. It’s not ready to be published.
The main point of this post is simply to find out whether there are others doing a similar project, and find other people who are interested in helping make sure the project gets completed.
You’ve described a project at a fairly high level of abstraction. You’ve already put 20-40 hours in, so your research has already likely taken some specific directions. Sharing a brief summary of this would help people with compatible approaches who think you’re doing potentially overlapping work notice that they should reach out to you. It would also help save the time of those who aren’t members of that group.
Peter just suggested you mention more details about the project, in the comments. Daniel did too. As a reader, I would have benefited if you’d replied by giving them details about the project. I expect there are more readers like me, who might reach out if a project seemed like it was going in an interesting direction (even if not my preferred direction), but not without such a specific reason to think it’s worth their time.
If there are specific reasons for discretion, of course, you can say so.
I think you’re saying “There isn’t enough information for most readers to decide whether they want to PM you.” Is that right?
Yes
Okay, what information do you think they need? You mentioned “directions” and “approaches” but that is very vague. I need the specific questions you think readers need answered before they will notify me of similar projects or express interest in what I’m doing.
There is roughly a 0.02–7% chance per year of accidental full-scale nuclear war between the US and Russia: source. Since NATO says an attack on one is an attack on all, this could easily spread to the UK. One simple precaution would be for EAs to locate in the suburbs, where the risk of being hit is lower (as I have done). The economics of this appear to be favorable because housing prices are typically lower in the suburbs, especially if you can commute by rail, which is low risk and offers good potential for multitasking. I would like to formalize this into a paper, but I would need a collaborator.
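To get a feel for what that annual range implies over a decade, here is a minimal sketch that converts an annual probability into a cumulative one. It assumes the risk is independent and identical across years, which is an assumption for illustration, not a claim from the cited source:

```python
# Sketch: convert an annual probability of an event into the probability
# of at least one occurrence over a number of years, assuming each year
# is independent with the same annual probability (a simplification).
def cumulative_risk(annual_p: float, years: int = 10) -> float:
    """Probability of at least one event in `years` independent years."""
    return 1 - (1 - annual_p) ** years

# The cited range: 0.02% to 7% per year.
low = cumulative_risk(0.0002)   # lower bound of the annual estimate
high = cumulative_risk(0.07)    # upper bound of the annual estimate
print(f"10-year risk: {low:.2%} to {high:.2%}")
```

Under these assumptions, the 10-year figure ranges from roughly 0.2% up to about 52%, which illustrates why the width of the annual estimate matters so much for planning.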
Interesting. Are you concerned that, in a full-scale nuclear war, most places in the northern hemisphere would be unsafe due to military targets outside the cities and due to fallout?
What do you think about this Q&A on Quora about where it would be safest in the event of a nuclear war? Most of the suggested safe locations are in the southern hemisphere like New Zealand.
Most of the Quora discussion seems reasonable regarding the safest locations. But changing countries just because of the threat of nuclear war is a pretty big sacrifice, so I am looking at lower-cost options. Also, being outside the target countries, even in the northern hemisphere, would generally not be too bad because the radiation largely rains out within a few days. And even within the target countries, if you are not hit by the blast or fire, you are most likely to survive. I believe the radiation exposure would be lower than at Chernobyl, which took about one year of life off the people nearby.
Depending on the circumstances, a focus on preserving EA as a movement and avoiding disruptions to existing top philanthropic opportunities may miss the most important opportunities. My guess is that we’ll do better asking questions like:
What types of disruptions might hamper our ability to coordinate with one another and outsiders to improve the world or mitigate emerging problems? (Different sub-problems may demand very different solutions.)
How can we solve these problems in a way that works for EA and other individuals and groups trying to do good? (We should try to generate solutions that transfer well, not just solve the problem for ourselves.)
Who else is already working on similar problems RE making global cooperation more robust to war or other likely disruptive events? What can we do to help them or benefit from their help?
What disruptions are EAs especially well placed to mitigate?
Which interventions are likely to be most important in the event of various disruptions?
Ooh. This looks interesting! Accomplishing goals like these would require over ten times as much time, so it would definitely need funding. I’m now envisioning starting a new EA org whose purpose is to prevent disruptions to EA productivity by identifying risks and planning in advance!
I would love to do this!
Thanks for the inspiration, Ben! :D
At the current time, I suspect the largest disaster risk is war in the US or UK. That’s why I’m focusing on war. I haven’t seriously looked into the emerging risks related to antibiotic resistance, but it might be a comparable source of concern (with a lower probability of harming EA, of course, but with a much higher level of severity). The most probable risk I currently see is that certain cultural elements in EA appear to have resulted in various problems. For a really brief summary: there is a set of misunderstandings that is having a negative impact on inclusiveness, possibly resulting in a significantly smaller movement than we’d have otherwise and potentially damaging the emotional health and productivity of an unknown number of individual EAs. The severity of that is not as bad as disease or war could get, but the probability of this set of misunderstandings destroying productivity is much higher than the others (that this is happening is basically guaranteed; it’s just a matter of degree). I chose to work on the risk of war because of the combination of probability and severity I currently suspect for war, relative to the other issues I could have focused on.
I have done a lot of thinking about some of the questions you pose here! I wish I could dedicate my life to doing justice to questions like “What is the worst threat to productivity in the effective altruism movement?” and I have been working on interventions for some of them. I have a pretty good basis for an intervention that would help with the cultural misunderstandings I mentioned, and this would also do the world a lot of good, because the second-biggest problem in the world, as identified by the World Economic Forum for 2017, would be helped through this contribution. Additionally, continuing my work on misunderstandings could reduce the risk of war. I really, really want to continue pursuing that, but I’m taking a few weeks to get on top of this potentially more urgent problem.
I have been stuck with making estimations based on the amount of information I have time to gather, so, sadly, my views aren’t nearly as comprehensive as I really wish they were.
I tend to keep an eye on risks in everything that’s important to me, like the effective altruism movement, because I prefer to prevent problems in my life wherever possible. Advance notice about big problems helps me do that.
As part of this, I have worked hard to compensate for around 5–10 biases that interfere with reasoning about risks, like optimism bias, normalcy bias, and the affect heuristic. These three can prevent you from realising bad things will happen, cause you to fail to plan for disasters, and make you disregard information just because it is unpleasant. The one bias I saw on the list that actually supports risk identification, pessimism bias, is badly outnumbered by the 5–10 biases that interfere with reasoning about risks. That is not to say that pessimism bias is actually helpful. Given that one can get distracted by the wrong risks, I’m wary of it. I think quality reasoning about risks looks like ordering risks by priority, choosing your battles, and making progress on a manageable number of problems rather than being paralysed thinking about every single thing that could go wrong. I think it also looks like problem-solving, because that’s a great way to avoid paralysis. I’ve been thinking about solutions as well.
After compensating for the biases I listed and others which interfere with reasoning about risks, I found my new perspective a bit stressful, so I worked very hard to become stronger. Now, I find it easy to face most risks, and I have a really, really high level of emotional stamina when it comes to spending time thinking about stressful things in general. In 2016, I managed to spend over 500 hours reading studies about sexual violence and doing related work while being randomly attacked by seven sex offenders throughout the year. I’ve never experienced anything that intense before. I can’t claim that I was unaffected, but I can claim that I made really serious progress despite a level of stress the vast majority of people would find too overwhelming. I managed to put together a solid skeleton of a solution which I will continue to build on. In the meantime, the solution can expand as needed.
I have discovered it’s difficult to share thoughts about risks and upsetting problems because other people have these biases, too. I’ve upgraded my communication skills a lot to compensate for that as much as possible. That is very, very hard. To become really excellent at it, I need to do more communication experiments, but I think what I’ve got at this time is sufficient to get through after a few tries with a bit of effort. Considering the level of difficulty, that’s a success!
Now that I think about it, I appear to have a few valuable comparative advantages when it comes to identifying and planning for risks. Perhaps I should seek funding to start a new org. :)
I like this one. If you plan to do good in an uncertain future, it makes sense to take advantage of altruism’s risk neutrality and put a lot of effort into scenarios that are reasonably likely but also favour your own impact.
In the event of a major disruption or catastrophe such as a war or negative political event in the EA heartland, this would mean that global health work would suddenly become pretty useless: no one would have the will or means to help people distant in space. But we would suddenly have much more leverage to help people who are distant in time, by trying to positively affect any recovery of civilisation. That could be by making it happen sooner, or by giving it some form of aid that is cheap for us. Robust preservation of information is a good idea. If there were a major disaster that destroyed the internet and most servers, and then a long period of civilisational downtime, it might make sense to try to save and distribute key information, for example Wikipedia, certain key books, sites, courses, etc.
There might also be attempts to distort history in a very thorough way. Perhaps steps can be taken against this.