I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.
To back this up a bit, let’s take a closer look at the risk factors Asya cited in the comment above.
Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, ranging from industry lobbyists to funders to media outlets to think tanks to agency staff to policymakers themselves, all trying to influence the outcomes of legislation, regulation, etc. in their preferred direction. As a result, turning a policy idea into reality is inherently hard and nearly impossible without buy-in from a diverse range of stakeholders. While that process can be frustrating and often waters down really good ideas into something less inspiring, it is actually quite good at mitigating the downside risks of bad policies! It’s understandable to see such a volatile mix of influences as scary and something to avoid, but we should also consider the possibility that it is a productive way to stress-test ideas coming out of EA/longtermist communities by exposing them to audiences with different interests and perspectives. After all, those interests at least partly reflect the landscape of competing motivations and goals in the broader public, and are thus often relevant to whether a policy idea will succeed.
Making key issues partisan. My view is that this is much more likely to happen through involvement in electoral politics than through traditional policy-advocacy work. Importantly, though, we just had a high-profile test of this idea in the form of Carrick Flynn’s bid for Congress. By the logic of EA grantmakers worried about partisan politicization, the Flynn campaign is one of the riskiest things this community has ever taken on (and remember, we only saw the primary; if he had won and run in the general, many Republican politicians’ and campaign strategists’ first exposure to EA and longtermism would have been seeing a Democrat, backed by two of the largest Democratic donors, running on EA themes in a competitive race against one of their own). And yet, as it turned out, it did not result in longtermism becoming politicized to any meaningful degree. So while the jury is still out, a reasonable working hypothesis based on what we’ve seen thus far is that “try to do good and help people” is just not a very polarizing position for most people, and we should therefore stress about it a little less.
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with. I think this one is pretty easily avoided. If the person leading a policy initiative is any of those things, they probably aren’t going to make much progress, and their work thus won’t cause much harm (other than wasting the grantmaker’s money). Furthermore, the growing media coverage of longtermism and the fact that longtermism has credible allies in society (multiple billionaires, an increasing number of public intellectuals, etc.) both significantly mitigate this concern, since those factors are far more likely to shape the opinions and actions of a broad set of policymakers.
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project. This seems to be a general concern about grantmaking to early-stage organizations and doesn’t strike me as unique to the policy space at all. If anything, it rests on the questionable premise that there is only one channel for communicating with policymakers and that only one organization or individual can occupy that channel at a time. As I stated earlier, policymakers already face huge ecosystems of people trying to influence policy outcomes; one more entrant isn’t going to take up much space at all. Moreover, policymakers themselves sit within a huge bureaucratic apparatus with many, many potential levers and points of access that can’t possibly all be covered by a single organization. I do agree that coordination is important and desirable, but we shouldn’t let that in itself be a barrier to policy entrepreneurship, IMHO.
To be clear, I do think these risks are all real and worth thinking about! But to my reasonably well-informed understanding of at least three EA grantmakers’ processes, most of these projects are not judged by way of a sober risk analysis that clearly articulates specific threat models, assigns probabilities to each, and weighs the resulting estimates of harm against a similarly detailed model of the potential benefits. Instead, the risks are assessed on a holistic and qualitative basis, with the result that many things that seem potentially risky are not invested in even if the upside of them working out could really be quite valuable. Furthermore, the risks of not acting are almost never assessed—if you aren’t trying to get the policymaker’s attention tomorrow, who’s going to get their ear instead, and how likely might it be that it’s someone you’d really prefer they didn’t listen to?
While there are always going to be applications that are not worth funding in any grantmaking process, I think when it comes to policy and related work we are too ready to let perfect be the enemy of the good.
[1] Important to note that the observations here are most relevant to policymaking in Western democracies; the considerations in other contexts are very different.
“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
On potential risk factors:
I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
I feel less in agreement about (3): my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing the policy through.
I think (4) indeed isn’t specific to the policy space, but it is a real downside that I’ve observed affecting other EA projects. I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects (or awkwardness around not doing so).
Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.
“I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside”
That’s fair, and I should also be clear that I’m less familiar with LTFF’s grantmaking than some others in the EA universe.
“It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.”
Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in the absence of it, it’s not necessarily an optimal strategy to substitute an extreme version of the precautionary principle instead.
Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.