Ooh. This looks interesting! Accomplishing goals like these would require over ten times as much time as I have available, so this definitely requires funding. I’m now envisioning starting up a new EA org which serves the purpose of preventing disruptions to EA productivity by identifying risks and planning in advance!
I would love to do this!
Thanks for the inspiration, Ben! :D
At the current time, I suspect the largest disaster risk is war in the US or UK. That’s why I’m focusing on war. I haven’t seriously looked into the emerging risks related to antibiotic resistance, but it might be a comparable source of concern (with a lower probability of harming EA, of course, but with a much higher level of severity). The most probable risk I currently see is that certain cultural elements in EA appear to have resulted in various problems. For a really brief summary: there is a set of misunderstandings which is having a negative impact on inclusiveness, possibly resulting in a significantly smaller movement than we’d have otherwise and potentially damaging the emotional health and productivity of an unknown number of individual EAs. The severity of that is not as bad as disease or war could get, but the probability of this set of misunderstandings destroying productivity is much higher than that of the others (that this is happening is basically guaranteed; it’s just a matter of degree). I chose to work on the risk of war because of the combination of probability and severity I currently suspect, relative to the other issues I could have focused on.
I have done a lot of thinking about some of the questions you pose here! I wish I could dedicate my life to doing justice to questions like “What is the worst threat to productivity in the effective altruism movement?”, and I have been working on interventions for some of them. I have a pretty good basis for an intervention that would help with the cultural misunderstandings I mentioned, and this would also do the world a lot of good, because the second-biggest problem in the world, as identified by the World Economic Forum for 2017, would be helped through this contribution. Additionally, continuing my work on misunderstandings could reduce the risk of war. I really, really want to continue pursuing that, but I’m taking a few weeks to get on top of this potentially more urgent problem.
I have been stuck making estimates based on however much information I have time to gather, so, sadly, my views aren’t nearly as comprehensive as I really wish they were.
I tend to keep an eye on risks in everything that’s important to me, like the effective altruism movement, because I prefer to prevent problems in my life wherever possible. Advance notice of big problems helps me do that.
As part of this, I have worked hard to compensate for around 5-10 biases that interfere with reasoning about risks, such as optimism bias, normalcy bias, and the affect heuristic. These three, respectively, can prevent you from realising bad things will happen, cause you to fail to plan for disasters, and make you disregard information just because it is unpleasant. The one bias I saw on the list that actually supports risk identification, pessimism bias, is badly outnumbered by the biases that interfere. That is not to say that pessimism bias is actually helpful; given that one can get distracted by the wrong risks, I’m wary of it. I think quality reasoning about risks looks like ordering risks by priority, choosing your battles, and making progress on a manageable number of problems rather than being paralysed thinking about every single thing that could go wrong. I think it also looks like problem-solving, because that’s a great way to avoid paralysis. I’ve been thinking about solutions as well.
After compensating for the biases I listed and others which interfere with reasoning about risks, I found my new perspective a bit stressful, so I worked very hard to become stronger. Now, I find it easy to face most risks, and I have a really, really high level of emotional stamina when it comes to spending time thinking about stressful things in general. In 2016, I managed to spend over 500 hours reading studies about sexual violence and doing related work while being randomly attacked by seven sex offenders throughout the year. I’ve never experienced anything that intense before. I can’t claim that I was unaffected, but I can claim that I made really serious progress despite a level of stress the vast majority of people would find overwhelming. I managed to put together a solid skeleton of a solution, which I will continue to build on and which can expand as needed in the meantime.
I have discovered that it’s difficult to share thoughts about risks and upsetting problems because other people have these biases, too. I’ve upgraded my communication skills a lot to compensate for that as much as possible. That is very, very hard. To become really excellent at it, I need to do more communication experiments, but I think what I’ve got at this time is sufficient to get through to people after a few tries and a bit of effort. Considering the level of difficulty, that’s a success!
Now that I think about it, I appear to have a few valuable comparative advantages when it comes to identifying and planning for risks. Perhaps I should seek funding to start a new org. :)