Wow, this is really fantastic work! Thank you for the effort you put into this. Overall I think this paints a more optimistic picture of lobbying than I would have expected, which I find encouraging.
To follow up on a couple specific points:
(1) Just in terms of my own project planning, do you have an estimate of how long you spent on this? If you had another 40 hours, what uncertainties would you seek to reduce?
(2) Your discussion of Baumgartner et al. (2009) is super interesting. You write “Policy change happens over a long time frame.” I wonder if you could expand on this briefly. Do you mean that it takes a lot of lobbying over years before a policy change happens, or do you mean that meaningful policy change happens through incremental policy changes over time?
(3) Your finding that lobbying which protects the status quo is much more likely to be effective seems particularly actionable. I mean, once put into words it seems obvious, but it’s a point I hadn’t thought about before. I notice, though, that your list of ideas seems to consist of positive changes rather than status quo protection. I wonder if it would be worth brainstorming a list of good status quo issues that might be under threat. Protecting these would be less exciting than big changes, but for exactly the reasons you outline here more likely to work!
(4) I’m interested in thinking a bit more about uncertainty about policy implementation. This is something that we’re currently grappling with in our models of policy change where I work (Founders Pledge). On the one hand, the Tullock Paradox suggests that we should expect lobbying to be extremely difficult (otherwise everyone would do a lot more of it). On the other hand, we’ve noticed that very good policy advocates seem to quite regularly effect meaningful policy changes (for example, it seems like the Clean Air Task Force regularly succeeds in their work).
In your model you write that “the change in probability of policy implementation lies with 95% confidence between 0 and 5%, and is distributed normally.” I’m not sure about this, but I imagine the distribution of “chance of effecting policy success” over all the possible policies we could work on is much flatter than this. Or perhaps it’s bimodal: there are some issues on which it is near impossible to make progress and some issues where we could definitely get policies implemented if we spent a certain amount of money in the right way.
Perhaps we want to start with a low prior chance of policy success, and then update way up or down based on which policy we’re working on. Do you think we’d be able to identify highly-likely policies in practice?
(5) I found this post super helpful, but overall I think I’m still quite puzzled by the Tullock Paradox. If anything I’m more confused now, given that this post made me update in favour of policy advocacy. I think perhaps something that’s missing here is a discussion of incentives within the civil service or bureaucracy. A policy proposal like taking ICBMs off hair-trigger alert just seems so obvious, so good, and so easy that I think there must be some illegible institutional factors within the decision-making structure stopping it from happening. I don’t blame you for excluding this issue considering the size of this post and the amount of research you’ve already done, but it seems worth flagging!
Thanks again for a great post! I’m really excited about more work in this vein.
(1) I spent something like 100 hours on this over the course of several months. I think I could have cut this by something like 30-40% if I’d been a little bit more attentive to the scope of the research. I decided on the scope (assessing the effectiveness of national-level legislative lobbying in the U.S.) at the beginning of the project, but I repeatedly wound up off track, pursuing lines of research outside of what I’d decided to focus on. I also spent a good chunk of time on the GitHub repo with the setup for analyzing lobbying data, which wasn’t directly related to the lit review but which I felt served the goal of presenting this as a foundation for further research.
If I had 40 more hours, I’d intentionally pursue an expanded scope. In particular, I’d want to fully review the research on lobbying of (a) regulatory agencies and (b) state and local governments. I explicitly excluded studies along those lines, some of which were very interesting.
(2) Thanks for asking for clarification on this. Baumgartner et al. mean that it takes a long time for policy change to be observed on any given issue. After starting to pursue a policy goal, lobbyists are more likely to see success after four years than after two.
Baumgartner et al. include a chapter that is mostly critical of the incrementalist idea of policy change, which they trace to Charles Lindblom’s 1959 article The Science of “Muddling Through”. Incrementalism is tied to Herbert Simon’s idea of “bounded rationality.” Broadly, the incrementalist idea is that policymakers face a broad universe of possible policy options, and in order to reduce the landscape to a manageable set, they choose from only the most available options, e.g. those closest to the status quo: “incremental” changes.
Frank Baumgartner and Bryan Jones are now well known for their theory of “punctuated equilibrium.” This is a partial alternative to incrementalism which uses the analogy of friction to understand policy change. Basically: pressure builds on an issue over a period of time, during which no change occurs. Once the pressure is overwhelming, policy shifts in a major way.
I say that punctuated equilibrium is a “partial” alternative because Baumgartner and Jones actually collected data that seems to demonstrate that policy change follows a steeply peaked, fat-tailed distribution. Their overall takeaway is that very small changes are overwhelmingly common, moderate changes are relatively uncommon, and very large changes are surprisingly common. To come back to your question, Baumgartner et al. might say that although most policy change is incremental—like year-to-year changes in agency budgets—meaningful policy change happens in a big way, all of a sudden.
(3) I agree with you. I think some of my suggested policies are not likely to be those most effectively advocated for, and I included them just to give a flavor of the types of things we might care about lobbying for. Coming up with more practicable ideas is, I think, a much bigger, much longer-term project.
I also think that although lobbying for the status quo is more effective all other things being equal, it may not be the best use of EA resources to focus exclusively on that side of things. That’s because (per the counteractive lobbying theory) on many issues there are latent interests that will arise to lobby against harmful proposals. It’s hard to identify beforehand which proposals will stimulate this opposition, so there’s a lot of prior uncertainty as to whether funding opposition to policy change is marginally useful in expectation.
(4) There are a lot of takes on the Tullock paradox, but I’ll present two broad possible explanations.
Explanation A: Lobbying is basically ineffective, and the reason we don’t see more lobbying is that most organizations recognize its ineffectiveness.
Explanation B: Lobbying is highly effective, and the reason we don’t see more lobbying is that relatively small expenditures can exert enormous amounts of leverage.
Given the evidence here, I’m starting to be a lot more inclined toward Explanation B. I think it’s demonstrably not the case, as you have noted with respect to the Clean Air Task Force, that organizations that lobby are wasting their money. For both altruistic and self-interested interest groups, the rewards to be captured are very large, and they make it worth the risk of wasting money. Alexander, Scholz, and Mazza (2009), for example, find a 22,000% return on investment.
If Explanation B holds, then the question is really just why the market for policy isn’t efficient. Why hasn’t the price of lobbying been bid up to the value of the rewards to be captured? I think it seems likely that this is down to multiple layers of information asymmetry (between legislators and their staffs, between these staffers and lobbyists, between lobbyists and their clients, etc.), which create multiple layers of uncertainty and drive the expected value of lobbying down from the standpoint of those in a position to purchase it.
I agree with you that a normal distribution is probably not the best choice to model the expected incremental change in probability. I felt like, given my CI for this figure and my sense that values closer to 0% and values closer to 5% were each less likely than values in the middle of that range, this served my purposes here—but please take my code and modify as you see fit!
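To make the comparison concrete, here’s a rough Python sketch of the two options: the normal implied by my 95% CI of (0%, 5%), next to a bimodal mixture like the one you describe. The mixture’s weights and mode locations are purely illustrative assumptions on my part, not estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# My normal: a 95% CI of (0%, 5%) implies mean 2.5%, sd = 2.5% / 1.96
normal_draws = rng.normal(loc=0.025, scale=0.025 / 1.96, size=n)

# An illustrative bimodal alternative: most issues are near-intractable,
# a minority are quite tractable (weights and locations are assumptions)
hard = rng.normal(loc=0.002, scale=0.001, size=n)
easy = rng.normal(loc=0.04, scale=0.01, size=n)
is_tractable = rng.random(n) < 0.2  # assume ~20% of issues are tractable
bimodal_draws = np.where(is_tractable, easy, hard)

# These are probabilities, so clip to [0, 1]
normal_draws = np.clip(normal_draws, 0.0, 1.0)
bimodal_draws = np.clip(bimodal_draws, 0.0, 1.0)

print(f"mean under normal:  {normal_draws.mean():.4f}")
print(f"mean under bimodal: {bimodal_draws.mean():.4f}")
```

With these particular (assumed) parameters, the mixture’s mean sits well below the normal’s even though both cover roughly the same range, which is one way the distributional choice can matter for expected-value estimates.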
Perhaps we want to start with a low prior chance of policy success, and then update way up or down based on which policy we’re working on. Do you think we’d be able to identify highly-likely policies in practice?
I don’t know. I think it’s worth investigating. It seems like, given an already-existing basket of policies we’d be interested in advocating for, we can make lobbying more cost-effective just by allocating more resources to (e.g.) issues that are less salient to the public.
I have a sense that lobbyists do, in fact, do something like what you’re describing, and that this is part of the resolution to the Tullock paradox. Money spent on lobbying is not spent all at once: lobbyists can make an effort, check their results, report to their clients, and identify whether or not continued expenditure is likely to meet with success. If lobbying expenditure on a given topic seems unlikely to make a difference, then it can just stop. I wasn’t able to find anything on how this process actually works, so the next step in this research is to actually talk to some lobbyists.
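For what it’s worth, that staged-spending process can be sketched as a toy simulation (every parameter value here is an illustrative assumption, since as I said I couldn’t find data on how this actually works): spend in rounds, observe a noisy signal of tractability after each round, and stop as soon as the signal looks bad.

```python
import numpy as np

rng = np.random.default_rng(1)

def staged_lobbying(p_tractable=0.2, stages=4, cost_per_stage=1.0,
                    signal_accuracy=0.8, n_sims=50_000, rng=rng):
    """Toy model: spend in stages; after each stage, observe a noisy
    signal of whether the issue is tractable and stop if it looks bad.
    All parameter values are illustrative assumptions."""
    tractable = rng.random(n_sims) < p_tractable
    total_spent = np.zeros(n_sims)
    active = np.ones(n_sims, dtype=bool)
    for _ in range(stages):
        total_spent[active] += cost_per_stage
        # The signal matches the truth with probability signal_accuracy
        correct = rng.random(n_sims) < signal_accuracy
        signal = np.where(correct, tractable, ~tractable)
        active &= signal  # keep spending only where the signal looks good
    return total_spent, tractable

spent, tractable = staged_lobbying()
print(f"avg spend on intractable issues: {spent[~tractable].mean():.2f}")
print(f"avg spend on tractable issues:   {spent[tractable].mean():.2f}")
```

The mechanism this illustrates: with staged spending and even a moderately informative signal, average losses on intractable issues stay close to one round’s cost, while spending continues on the tractable ones.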
(5)
I think perhaps something that’s missing here is a discussion of incentives within the civil service or bureaucracy
I agree with this too. I’d love for an EA with a public choice background to tackle this topic. I didn’t consider it as part of my scope, but I do want to note something:
A policy proposal like taking ICBMs off hair-trigger alert just seems so obvious, so good, and so easy that I think there must be some illegible institutional factors within the decision-making structure stopping it from happening.
I think this is probably true in many if not most cases of yet-to-be-implemented policy changes that are obvious, good, and easy. It is probably true in this case. But I want to warn against concluding that, because some obvious, good, and easy policy change has not been implemented, that means that there is some illegible institutional factor that is stopping it from happening. It could just be that no one has been pushing for it. In EA terms, it’s an important and tractable policy change that’s neglected by the policy community. Given what I know about the policy community, it’s not at all difficult for me to imagine that such policies exist.
Thanks for this! Something that came to my mind as I was reading this was that it might be time for an update of CEA’s list of good policy ideas that won’t happen (yet).
You wrote that “It seems like, given an already-existing basket of policies we’d be interested in advocating for, we can make lobbying more cost-effective just by allocating more resources to (e.g.) issues that are less salient to the public.” This made me think it might be useful to make a list of EA-relevant policy ideas and start organizing them into a Charity Entrepreneurship-style spreadsheet. Something I’ll keep musing on!
I’m also curious about what motivated you to take on this project, and what you’re planning to work on next?
I’m replying again here to note that I’ve struck the salience point from my conclusions. I’ve noted why up top. I now have a lot of uncertainty about whether this is the case or not, and don’t stand by my suggestion that salience is a good guide to resource allocation.
I like this spreadsheet idea and think I may kick it off (if you haven’t already done so!).
I took the project on because I got interested in the topic, went looking for a review like this, couldn’t find one, and decided to write it myself in the hope it would be useful to others. I wasn’t feeling very useful in my day job, so it was easy to stay motivated to spend time on this for a while. I tend to be most interested in generalizable or flexible approaches to improving welfare across different domains, and this seemed like it might be one of those.
Some areas I’m thinking about exploring. These are pretty rough thoughts:
Some more exploration of strategies for ameliorating child abuse in light of the well-known ACES Study. GiveWell and RandomEA have both explored Nurse-Family Partnerships. This problem is just so huge in terms of people affected (and in terms of second-order effects) that I think it’s worth exploring a lot more. I’m particularly interested in focusing on child sexual abuse.
Aggregating potentially cost-effective avenues to improve institutional performance. I’m curious about thinking at a higher level of abstraction than institutional decision-making. It seems worthwhile to put together the existing cross-disciplinary evidence on the question: what steps, outside of those explicitly focused on rationality and decision-making, can companies/nonprofits/government agencies take to increase the probability that they make good decisions? A good example of one such step comes from the apparent evidence that intellectually diverse teams make better decisions.
Long-term cost-effectiveness of stress reduction for pregnant women (with potential effects on infant mortality, maternal health, and long-term outcomes like brain development and violence).
Review of recent innovations that seem like they might have potential for expediting scientific progress (like grant lotteries).