I run an advocacy nonprofit, 1Day Sooner. When good things happen that we have advocated for, it raises the obvious question, “were we the but-for cause?”
A recent experience in our malaria advocacy work (W.H.O. prequalification of the R21 vaccine, a key advocacy target of ours) is illustrative. Prequalification was on the critical path for malaria vaccine deployment. Based on analysis of public sources and conversations with insiders, we came to the view that there was friction and possibly political pressure delaying prequalification from occurring as quickly as would be ideal. We decided to focus public pressure on a faster process (by calling for a prequalification timeline, asking Peter Singer to include the request in his op-ed on the subject, discussing the issue with relevant stakeholders, and asking journalists to inquire about it). We thought prequalification would take at least until January and probably longer. Then, a few days before Christmas, a journalist we were talking to sent us a W.H.O. press release: that morning, prequalification had been announced. Did it happen sooner because of us?
The short answer is we don’t know. The reason I’m writing about it is that it highlights a type of causal uncertainty that I think is common (though not universal) in advocacy campaigns and should be relevant to EA thinking.
In some campaigns, you find yourself on the inside of a decision-maker’s process in a way that can give you some amount of certainty as to your role.[1] For my kidney donor reimbursement campaign at Waitlist Zero (pre-1Day Sooner), I saw some text related to some Trump administration actions before they happened, had good transparency into the decision-making behind the Advancing American Kidney Health Initiative that my policy was a part of, and had decent confidence that my work was a but-for cause.
But for others, like the W.H.O. prequalification above or the Biden Administration’s announcement of Project NextGen, things are much fuzzier. Something you advocated for happens without your getting advance notice. You’ve made the case for it publicly and perhaps exercised some levers to pressure decision-makers. Did you influence the outcome? How can you know?
I’m highlighting this experience because when it happened with NextGen I didn’t really understand how to think about it, and now with prequalification I’m at least noticing the common pattern. (To be clear, I think the case for our causal influence on prequalification is stronger than for NextGen).
I think it’s of note for EA advocates because it raises the challenge of evaluating advocacy through a consequentialist framework. To me the strongest theoretical challenge to consequentialism is uncertainty and the unknowability of the future. The uncertainty of advocacy impact is a very practical example of this broader challenge.
One thought about this that I’d sent to a funder who asked about the NextGen campaign is below:

“So like ideally what you want as an advocacy group is to be inside-track on relevant decisions but maybe second-best is to be creating a narrative out of which the decision you are seeking manifests. That is to say we put on the performance of the need/desirability of a ‘Warp Speed 2’ that the relevant decision-makers witnessed and participated in (even if it is unknowable whether it was causal or they were reactive to it) and this performative scaffold is sort of prima facie valuable for goals that are valuable (and the nature of advocacy/consequentialism is that oftentimes that’s all that will be knowable even during a successful campaign).”
To be clear, the prequalification advocacy story was different from this performative scaffolding concept. The most obvious way we may have been causally relevant is that the comms departments of the relevant entities were likely getting journalistic inquiries about the issue from some major outlets, which very possibly scared the bejesus out of them and increased the desirability of hurrying up.
I raise these case studies because I hope they can provoke further thought, discussion, and insights from EAs involved in advocacy work.
Even this is complicated, because decision-makers often have an incentive to flatter you, and for many issues, even if you’re on the inside of the process, you don’t know whether it would have happened without you.
As someone who works on comms stuff, I struggle with this a lot too! One thing I’ve found helpful is just asking decision makers, or people close to decision makers, why they did something. It’s imperfect, but often helpful — e.g. when I’ve asked DC people what catalysed the increased political interest in AI safety, they overwhelmingly cited the CAIS letter, which seems like a fairly good sign that it worked. (Similarly, I’ve heard from people that Ian Hogarth’s FT article may have achieved a similar effect in the UK.)
There are also proxies that can be kind of useful — if an article is incredibly widely read, and is the main topic on certain corners of Twitter for the day, and then the policy suggestions from that article end up happening, it’s probably at least in part because of the article. If readership/discussion was low, you’re probably not the cause.
After doing a whole lot of really complicated and impressive calculations I think it happened one day sooner because of you ;).
Great to see more advocacy and advocacy evaluation-related content on the EA Forum! Sharing a few things that might be of interest to you and others:
Founders Pledge has a great document on evaluating policy organisations that puts forward some interesting considerations on evaluating the counterfactual impact of an org, e.g.:
“We gather evidence from independent referees and written sources to confirm or disconfirm the charity’s own account of their impact. Below is a hierarchy of testimony evidence, ranked from the most to least desirable.
1. Well-informed people with incentives to downplay the role played by the organisation (e.g. a rival organisation)
2. Well-informed people with no incentive to mislead about the role played by the organisation (e.g. government bureaucrats or politicians who were directly involved in the policy change)
3. Well-informed people who have incentives to mislead about the role played by the organisation (e.g. an organisation’s campaign partners)
4. People with less information on the role played by the organisation” [I made small edits to make this shorter]
I also recommend Hear This Idea’s podcast with Steven Teles, a political scientist who wrote a great book about advocacy within the conservative legal movement and an article about the difficulty of evaluating advocacy.
Great post!
My recommendation to policy people, having worked in policy myself, is to name things where possible (initiatives, policies, bills, organisations). It makes it much easier to evaluate your impact in the future if something does get set up and it carries the name you gave it!
I haven’t had time to read all the discourse about Manifest (which I attended), but it does highlight a broader issue about EA that I think is poorly understood, which is that different EAs will necessarily have ideological convictions that are inconsistent with one another.
That is, some people will feel their effective altruist convictions motivate them to work to build artificial intelligence at OpenAI or Anthropic; others will think those companies are destroying the world. Some will try to save lives by distributing medicines; others will think the people those medicines save eat enough tortured animals to generally make the world worse off. Some will think liberal communities should exclude people who champion the existence of racial differences in intelligence; others will think excluding people for their views is profoundly harmful and illiberal.
I’d argue that the early history of effective altruism (i.e. the last 10-15 years) has generally been one of centralization around purist goals: there are central institutions that effective altruism revolves around, and specific causes and ideas that are treated as the most correct form of effective altruism. I’m personally much more a proponent of liberal, member-first effective altruism than of purist, cause-first EA. I’m not sure which of those options the Manifest example supports, but I do think it’s indicative of the broader reality that, for a number of issues, people on each side can believe the most effective altruist thing to do is to defeat the other.
Surely the Manifest example is more individualist? It isn’t an EA event (nor even a rationalist one).
Descriptively I agree, but normatively it’s not obvious to me which alternative it supports.
For a while I’ve been thinking about an idea I call Artificial Environment risk, but I haven’t had the time to develop the concept in detail (or come up with a better name). The idea is roughly that the natural environment is relatively robust (since it’s been around for a long time, and average species extinction rates and causes are somewhat predictable), but as a higher proportion of humanity’s environment is artificially created, we voyage into an unknown without a track record of safety or stability. So the risk of dangerous phenomena increases dramatically as we move from a paradigm in which we depend on an environment that has been roughly stable for millions of years to one for which the evidence of stability is measured maybe in decades (or less!). Global warming and ozone degradation are obvious examples of this. But I also think that AI risk, biosecurity, nuclear war, etc. fall under this (perhaps overly large) umbrella: as humans gain more and more ability to manipulate our environment, we accumulate more and more opportunities for systemic destruction.
Some risks, like global warming, are fairly observable and relatively straightforward epistemically. But things like the attention environment and the influence of social media are more complicated: as tools for attracting attention become more and more powerful, they create unforeseen (and hard-to-specify or verify) effects (e.g. perhaps education polarization, weakening of elite mediation of information, harm to public discourse) that may be quite influential (and may interact with or compound other effects) but will be hard to plan for, understand, or mitigate.
One reason I find the idea worth trying to sketch out is that, assuming technological development continues and humanity’s control over our environment keeps increasing, risk will generally continue to rise, and fewer technologies will realistically be considered riskless. (So we will face more tradeoffs about things like whether to develop technologies that can end infectious disease but also enable better weapons.)
The idea of differential technological acceleration is aimed at this problem, but I am not sure how predictable offense/defense will be or how to effectively make political decisions about which fields and industries to nurture or cull. Part of the implication I draw from categorizing this broad set of risks together is that the space for new scientific and technological development will become more crowded—with fewer value-neutral or obviously positive opportunities for growth over time.
I think this may also tend to manifest in a clearer division between what you might call left-EAs (progress-focused) and right-EAs (security-focused) (in some sense this corresponds to global-health-focused EAs vs. existential-risk-focused EAs currently, but the division is less clear). But that also goes to a separate view I have: that EAs will have to accept more internal ideological diversity over time and recognize that the goals of different effective altruists will conflict with one another (e.g. extending human lifespan may be bad for animal welfare; synthetic biology advances that cure infectious disease may increase biohazard risks; etc.).
It’s very possible these ideas aren’t original. As I said, they’re very thinly sketched at the moment, but I’ve been thinking about them for a while, so I figured I should write them out.
This paper starts with a simple model that formalizes a tradeoff between technological progress increasing growth/wellbeing/consumption, and having a small chance of a massive disaster that kills off a lot of people. When do we choose to keep growing? The intuitive idea is that we should keep growing if the growth rate is higher than (the odds ratio of death) times (the dollar value of a statistical life). If the value of life in dollar terms is low—because everyone is desperately poor, so dollars are worth a lot—then growth is worth it. But under very mild assumptions about preferences, where the value of life relative to money grows with income, we will eventually choose to stop growing.
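As a rough sketch of that decision rule in symbols (my notation, not necessarily the paper’s): let $g$ be the growth rate delivered by continued technological progress, $\delta$ the per-period odds of the catastrophic disaster it risks, and $v$ the dollar value of a statistical life relative to consumption. The rule described above is then roughly

$$\text{keep growing} \iff g > \delta \cdot v.$$

Because the preference assumptions make $v$ rise with income while $g$ does not, the right-hand side eventually overtakes the left, which is why growth is worth it while people are poor but we eventually choose to stop.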
However, the question becomes different if the value of new technologies lies in saving lives rather than increasing prosperity. Money has diminishing marginal utility; life does not. So technology that saves lives with certainty, but destroys a lot of lives with some probability, is just a gamble. We decide to keep progressing if it saves more lives in expectation than stopping does, but unfortunately that’s not a very helpful answer.
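In the same spirit, one way to write that expected-lives comparison (again my notation, just restating the paragraph above): if continuing saves $s$ lives per period with certainty but carries a probability $p$ of a disaster destroying $d$ lives, we keep progressing iff

$$s > p \cdot d,$$

i.e. iff expected lives saved exceed expected lives lost. Unlike the consumption case, nothing here automatically flips the inequality as we get richer, since lives (unlike money) don’t have diminishing marginal utility.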
The hygiene hypothesis (especially the autoimmune disease variant, brief 2-paragraph summary here if you Ctrl+F “Before we go”) could be another example.
On a somewhat related note, Section V of this SlateStarCodex post goes through some similar examples where humans’ departures from long-lived tradition have negative effects that don’t become visible for a long time.