I agree that systemic change should be given more thought in EA, but there’s a very specific problem that I think we need to tackle before we can do this seriously: a lot of the tools and mindsets in EA are inadequate for dealing with systemic change.
To explain what I mean, I want to quickly make reference to a chart that Caroline Fiennes uses in her book. Essentially, you can think of work on social issues as a sort of ‘pyramid’. At the top of the pyramid you have very direct work (deworming, bed nets, cash transfers, etc.). This work is comparatively certain to succeed, and you can fairly easily attribute changes in outcomes to these programs. However, the returns are small—you only help those you work with directly. As you go down the pyramid, you start to consider programs that focus on communities… then those that focus on changing larger policy and practice… then changing attitudes and norms (or some types of systemic change)… and eventually you get to things like existential risks. As you go down the pyramid, you get greater returns to scale (you can impact far more people), but it becomes much more uncertain that you will have an impact at all, and it also becomes very hard to attribute any change in outcomes to a particular program.
My worry is that the tools the EA movement relies on were created with the top of the pyramid in mind—the main forms of causal research, cost-effectiveness analysis, and so on were not built with the bottom, or even the middle, of the pyramid in mind. Yes, members of EA have gotten very good at trying to apply these tools to the bottom and middle, but it can get a bit screwy very quickly (as someone with an econ background, I shudder whenever someone uses econ tools to try to forecast the cost-effectiveness of X-risk reduction activities—it’s like trying to peel a potato while blindfolded using a pencil: it’s not what the pencil was made for, and even though it is technically possible, I’ll be damned if the blindfolded person actually has a clue whether it’s working).
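To make that worry a bit more concrete, here’s a toy Monte Carlo sketch (my own illustration—all the numbers are invented, not from Fiennes or any real evaluation). It compares a well-measured direct intervention against a systemic one with a small chance of success and a huge, fat-tailed payoff. The point-estimate machinery still spits out an expected value for both, but the spread around the systemic estimate is so large that the number tells you very little:

```python
import random
import statistics

random.seed(0)
N = 100_000  # Monte Carlo draws

# Direct intervention (top of the pyramid): a well-measured effect
# with narrow uncertainty. Units: hypothetical "impact per $100k".
direct = [max(0.0, random.gauss(30, 5)) for _ in range(N)]

# Systemic intervention (bottom of the pyramid): a small chance the
# campaign succeeds, with a huge, highly uncertain payoff if it does.
# Both the 1% and the lognormal parameters are made up for illustration.
systemic = []
for _ in range(N):
    success = random.random() < 0.01
    systemic.append(random.lognormvariate(8, 1.5) if success else 0.0)

# Compare means and coefficients of variation (stdev / mean).
cv_direct = statistics.stdev(direct) / statistics.mean(direct)
cv_systemic = statistics.stdev(systemic) / statistics.mean(systemic)
print(f"direct:   mean={statistics.mean(direct):8.1f}  cv={cv_direct:6.2f}")
print(f"systemic: mean={statistics.mean(systemic):8.1f}  cv={cv_systemic:6.2f}")
```

With these (made-up) parameters the systemic option can even have the higher mean, but its relative spread is orders of magnitude larger—which is exactly the regime where a single cost-effectiveness point estimate stops being informative.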
We should definitely keep our commitment to these tools. But if we want to be rigorous about exploring systemic risks, we should probably start by figuring out how to expand our toolbox to address these issues as rigorously as possible—and, importantly, by figuring out exactly when our current tools are insufficient. We already have this for many of our tools, in the form of assumptions that, when violated, break the tool—but I haven’t seen people rigorously checking them! I’m sure that many of us have clear ideas about how we can and should rigorously prioritize and evaluate various systemic risks—but I suspect we have as many opinions as we have people. We need to get on the same page first, which is why I’d suggest working out some basic standards and tools before going any further. Expanding our toolkit is key, though—perhaps someone should look into other disciplines that could help out? I’d do it, but I’m lazy and tired and probably would make a hash of it anyway.
“I’d do it, but I’m lazy and tired and probably would make a hash of it anyway.”—you seem rather knowledgeable, so I doubt that. I’ve heard it said that the perfect is the enemy of the good; a top-level post maybe twice the size of the above comment, providing just an extremely basic overview, would be a great place to start and would encourage further investigation by other people.