I run an advocacy nonprofit, 1Day Sooner. When good things we have advocated for happen, an obvious question arises: “were we the but-for cause?”
A recent experience in our malaria advocacy work (W.H.O. prequalification of the R21 vaccine, a key advocacy target of ours) is a good example. Prequalification was on the critical path for malaria vaccine deployment. Based on analysis of public sources and conversations with insiders, we came to the view that friction, and possibly political pressure, was delaying prequalification from happening as quickly as would be ideal. We decided to focus public pressure on a faster process (by calling for a prequalification timeline, asking Peter Singer to include the request in his op-ed on the subject, discussing the issue with relevant stakeholders, and asking journalists to inquire about it). We thought it would take until at least January and probably longer. Then, a few days before Christmas, a journalist we were talking to sent us a W.H.O. press release: prequalification had been announced that morning. Did it happen sooner because of us?
The short answer is we don’t know. The reason I’m writing about it is that it highlights a type of causal uncertainty that I think is common (though not universal) in advocacy campaigns and should be relevant to EA thinking.
In some campaigns, you find yourself on the inside of a decision-maker’s process in a way that can give you some amount of certainty as to your role.[1] For my kidney donor reimbursement campaign at Waitlist Zero (pre-1Day Sooner), I saw text related to certain Trump administration actions before they happened, had good visibility into the decision-making behind the Advancing American Kidney Health Initiative that my policy was a part of, and had decent confidence that my work was a but-for cause.
But for others, like the W.H.O. prequalification above or the Biden Administration’s announcement of Project NextGen, things are much fuzzier. Something you advocated for happens without your getting advance notice. You’ve made the case for it publicly and perhaps exercised some levers to pressure decision-makers. Did you influence the outcome? How can you know?
I’m highlighting this experience because when it happened with NextGen I didn’t really understand how to think about it; now, with prequalification, I’m at least noticing the common pattern. (To be clear, I think the case for our causal influence on prequalification is stronger than for NextGen.)
I think it’s of note for EA advocates because it raises the challenge of evaluating advocacy through a consequentialist framework. To me the strongest theoretical challenge to consequentialism is uncertainty and the unknowability of the future. The uncertainty of advocacy impact is a very practical example of this broader challenge.
One thought about this, which I sent to a funder who asked about the NextGen campaign, is below:
So like ideally what you want as an advocacy group is to be inside-track on relevant decisions but maybe second-best is to be creating a narrative out of which the decision you are seeking manifests. That is to say we put on the performance of the need/desirability of a “Warp Speed 2” that the relevant decision-makers witnessed and participated in (even if it is unknowable whether it was causal or they were reactive to it) and this performative scaffold is sort of prima facie valuable for goals that are valuable (and the nature of advocacy/consequentialism is that oftentimes that’s all that will be knowable even during a successful campaign).
To be clear, the prequalification advocacy story was different from this performative scaffolding concept: the most obvious way we may have been causally relevant is that the comms departments of the relevant entities were likely getting journalistic inquiries about the issue from some major outlets, which very possibly scared the bejesus out of them and increased the desirability of hurrying up.
I raise these case studies because I hope they can provoke further thought, discussion, and insights from EAs involved in advocacy work.
Even this is complicated, because decision-makers often have an incentive to flatter you, and for many issues, even if you’re on the inside of the process, you don’t know whether the process would have happened without you.
As someone who works on comms stuff, I struggle with this a lot too! One thing I’ve found helpful is just asking decision makers, or people close to decision makers, why they did something. It’s imperfect, but often helpful — e.g. when I’ve asked DC people what catalysed the increased political interest in AI safety, they overwhelmingly cited the CAIS letter, which seems like a fairly good sign that it worked. (Similarly, I’ve heard from people that Ian Hogarth’s FT article may have achieved a similar effect in the UK.)
There are also proxies that can be kind of useful — if an article is incredibly widely read, and is the main topic on certain corners of Twitter for the day, and then the policy suggestions from that article end up happening, it’s probably at least in part because of the article. If readership/discussion was low, you’re probably not the cause.
After doing a whole lot of really complicated and impressive calculations I think it happened one day sooner because of you ;).
Great to see more advocacy and advocacy-evaluation-related content on the EA Forum! Sharing a few things that might be of interest to you and others.
Founders Pledge has a great document on evaluating policy organisations that puts forward some interesting considerations on evaluating the counterfactual impact of an org, e.g.:
“We gather evidence from independent referees and written sources to confirm or disconfirm the charity’s own account of their impact. Below is a hierarchy of testimony evidence, ranked from the most to least desirable.
1. Well-informed people with incentives to downplay the role played by the organisation (e.g. a rival organisation)
2. Well-informed people with no incentive to mislead about the role played by the organisation (e.g. Government bureaucrats or politicians who were directly involved in the policy change)
3. Well-informed people who have incentives to mislead about the role played by the organisation (e.g. An organisation’s campaign partners.)
4. People with less information on the role played by the organisation” [I made small edits to make this shorter]
I also recommend Hear This Idea’s podcast with Steven Teles, a political scientist who wrote a great book about advocacy within the conservative legal movement and an article about the difficulty of evaluating advocacy.
Great post!
Having worked in policy, my recommendation to policy people is: where possible, name things (initiatives, policies, bills, organisations). It makes it much easier to evaluate your impact in the future if something does get set up and it has the name that you gave it!