Hey, Sam – first, thanks for taking the time to write this post, and running it by us. I’m a big fan of public criticism, and I think people are often extra-wary of criticizing funders publicly, relative to other actors in the space.
Some clarifications on what we have and haven’t funded:
I want to make a distinction between “grants that work on policy research” and “grants that interact with policymakers”.
I think our bar for projects that involve the latter is much higher than for projects that are just doing the former.
I think we regularly fund “grants that work on policy research” – e.g., we’ve funded the Centre for Governance of AI, and regularly fund individuals who are doing PhDs or otherwise working on AI governance research.
I think we’ve funded a very small number of grants that involve interactions with policymakers – I can think of three such grants in the last year, two of which were for new projects. (In one case, the grantee has requested that we not report the grant publicly).
Responding to the rest of the post:
I think it’s roughly correct that I have a pretty high bar for funding projects that interact with policymakers, and I endorse this policy. (I don’t want to speak for the Long-Term Future Fund as a whole, because it acts more like a collection of fund managers than a single entity, but I suspect many others on the fund also have a high bar, and that my opinion in particular has had a big influence on our past decisions.)
Some other things in your post that I think are roughly true:
Previous experience in policy has been an important factor in my evaluations of these grants, and all else equal I think I am much more likely to fund applicants who are more senior (though I think the “20 years experience” bar is too high).
There have been cases where we haven’t funded projects (more broadly than in policy) because an individual has given us information about or impressions of them that led us to think the project would be riskier or less impactful than we initially believed, and we haven’t shared the identity or information with the applicant to preserve the privacy of the individual.
We have a higher bar for funding organizations than other projects, because they are more likely to stick around even if we decide they’re not worth funding in the future.
When evaluating the more borderline grants in this space, I often ask and rely heavily on the advice of others working in the policy space, weighted by how much I trust their judgment. I think this is basically a reasonable algorithm to follow, given that (a) they have a lot of context that I don’t, and (b) I think the downside risks of poorly-executed policy projects have spillover effects to other policy projects, which means that others in policy are genuine stakeholders in these decisions.
That being said, I think there’s a surprising amount of disagreement in what projects others in policy think are good, so I think the particular choice of advisors here makes a big difference.
I do think projects interacting with policymakers have substantial room for downside, including:
Pushing policies that are harmful
Making key issues partisan
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project
I suspect we also differ in our views of the upsides of some of this work – a lot of the projects we’ve rejected have wanted to do AI-focused policy work, and I tend to think that we don’t have very good concrete asks for policymakers in this space.
Thank you, Abergal. I hope my critique is helpful – I mean it to be constructive.
I don’t think I disagree with anything at all that you wrote here!! So glad we are mostly on the same page.
(In fact you suggest “we also differ in our views of the upsides of some of this work”, and I am not sure that is the case. I am fairly sceptical of much of it, especially the more AI-focused stuff.)
I still expect the main disagreements are on:
Managing downside risks. I worry that if we as a community don’t put time and effort into understanding how to mitigate downside risk well, we will make mistakes. Mostly I worry that we are pushing away anyone who could do direct (phase 2) type work, but also that we make some projects higher risk by denying funding, and that if you have a range of experts with veto power and extremely different views then perhaps between them every possible idea is vetoed.
Transparency. I have always been more in favour of transparency than others. I think it now being public that the LTFF has “a higher bar for funding organizations” and a “much higher bar for projects that involve interacting with policymakers” [paraphrased] is helpful. Also, I know giving feedback is a perk rather than an expected action of a funder, and that the LTFF is better at it than most, but I would aim for transparency by default about expert advisers etc.
If it is helpful, I am happy to discuss either of these points further.
Also super great to hear that you have made three grants in the last year to projects that interact with policymakers!! Even if this is “very small” compared to other grants, it is more than in previous years. I look forward to hearing what some of them are if there is a future write-up. :-)
I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.
To back this up a bit, let’s take a closer look at the risk factors Asya cited in the comment above.
Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, ranging from industry lobbyists to funders to media outlets to think tanks to agency staff to policymakers themselves, who are also trying to influence the outcomes of legislation/regulation/etc. in their preferred direction. As a result, making policies actually become reality is inherently quite hard and almost impossible to make happen without achieving buy-in from a diverse range of stakeholders. While that process can be frustrating and often results in watering down really good ideas to something less inspiring, it is actually quite good for mitigating the downside risks from bad policies! It’s understandable to think of such a volatile mix of influences as scary and something to be avoided, but we should also consider the possibility that it is a productive way to stress-test ideas coming out of EA/longtermist communities by exposing them to audiences with different interests and perspectives. After all, these interests at least in part do reflect the landscape of competing motivations and goals in the public more generally, and thus are often relevant for whether a policy idea will be successful or not.
Making key issues partisan. My view is that this is much more likely to happen by way of involvement in electoral politics than traditional policy-advocacy work. Importantly, though, we just had a high-profile test of this idea in the form of Carrick Flynn’s bid for Congress. By the logic of EA grantmakers worried about partisan politicization, my sense is that the Flynn campaign is one of the riskiest things this community has ever taken on (and remember, we only saw the primary—if he had won and run in the general, many Republican politicians’ and campaign strategists’ first exposure to EA and longtermism would have been by way of seeing a Democrat supported by two of the largest Democratic donors running on EA themes in a competitive race against one of their own.) And yet as it turned out, it did not result in longtermism being politicized basically at all. So while the jury is still out, perhaps a reasonable working hypothesis based on what we’ve seen thus far is that “try to do good and help people” is just not a very polarizing POV for most people, and therefore we should stress out about it a little less.
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with. I think this one is pretty easily avoided. If you have someone leading a policy initiative who is any of those things, they probably aren’t going to make much progress and their work thus won’t cause much harm (other than wasting the grantmaker’s money). Furthermore, the increasing media coverage of longtermism and the fact that longtermism has credible allies in society (multiple billionaires, an increasing number of public intellectuals, etc.) both significantly mitigate the concern expressed here, as those factors are much more likely to influence a broad set of policymakers’ opinions and actions.
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project. This seems to be more of a general concern about grantmaking to early-stage organizations and doesn’t strike me as unique to the policy space at all. If anything, it seems to rest on a questionable premise that there is only one channel for communicating with policymakers and only one organization or individual can occupy that channel at a time. As I stated earlier, policymakers already have huge ecosystems of people trying to influence policy outcomes; another entrant into the mix isn’t going to take up much space at all. But also, policymakers themselves are part of a huge bureaucratic apparatus and there are many, many potential levers and points of access that can’t all possibly be covered by a single organization. I do agree that coordination is important and desirable, but we shouldn’t let that in itself be a barrier to policy entrepreneurship, IMHO.
To be clear, I do think these risks are all real and worth thinking about! But to my reasonably well-informed understanding of at least three EA grantmakers’ processes, most of these projects are not judged by way of a sober risk analysis that clearly articulates specific threat models, assigns probabilities to each, and weighs the resulting estimates of harm against a similarly detailed model of the potential benefits. Instead, the risks are assessed on a holistic and qualitative basis, with the result that many things that seem potentially risky are not invested in even if the upside of them working out could really be quite valuable. Furthermore, the risks of not acting are almost never assessed—if you aren’t trying to get the policymaker’s attention tomorrow, who’s going to get their ear instead, and how likely might it be that it’s someone you’d really prefer they didn’t listen to?
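To make concrete what I mean by that kind of analysis, here is a minimal sketch in Python – every threat model, probability, and impact figure below is invented purely for illustration, and nothing in it reflects any actual grantmaker’s estimates or process:

```python
# Purely illustrative toy expected-value comparison for a hypothetical policy grant.
# All threat models, probabilities, and impact numbers are made up.

threat_models = {
    # name: (probability the harm occurs, harm if it occurs, in arbitrary impact units)
    "pushes a harmful policy": (0.05, -40),
    "makes the issue partisan": (0.02, -100),
    "creates a bad impression of the community": (0.10, -15),
}

benefits = {
    "main policy ask is adopted": (0.10, 200),
    "builds relationships and field capacity": (0.50, 10),
}

expected_harm = sum(p * impact for p, impact in threat_models.values())
expected_benefit = sum(p * impact for p, impact in benefits.values())

print(f"expected harm:      {expected_harm:+.1f}")
print(f"expected benefit:   {expected_benefit:+.1f}")
print(f"net expected value: {expected_benefit + expected_harm:+.1f}")
```

Even a back-of-the-envelope version like this forces the downside scenarios and the upside scenarios onto the same scale, which is the point.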
While there are always going to be applications that are not worth funding in any grantmaking process, I think when it comes to policy and related work we are too ready to let perfect be the enemy of the good.
Important to note that the observations here are most relevant to policymaking in Western democracies; the considerations in other contexts are very different.
100%
“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
On potential risk factors:
I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
I feel less in agreement about (3) – my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing that policy through.
I think (4) indeed isn’t specific to the policy space, but is a real downside that I’ve observed affecting other EA projects – I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects / awkwardness around not doing so.
Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.
Thanks for the response!
I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside
That’s fair, and I should also be clear that I’m less familiar with LTFF’s grantmaking than some others in the EA universe.
It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
Oh, I totally agree that the kind of risk analysis I mentioned is not costless, and for EA Funds in particular it seems like too much to expect. My main point is that in the absence of it, it’s not necessarily an optimal strategy to substitute an extreme version of the precautionary principle instead.
Overall, I agree that judging policy/institution-focused projects primarily based on upside makes sense.
and that if you have a range of experts with veto power and extremely different views then perhaps between them every possible idea is vetoed
Just to be clear, the LTFF generally has a philosophy of funding things if at least one fund member is excited about them, and a high bar for a fund member to veto a grant that someone else is championing, so I don’t think “every possible idea getting vetoed” is a very likely failure mode of the LTFF.
The idea that was suggested to me by an EA policy person was not that other fund members veto but that external experts veto.
The hypothetical story is that a fund manager who is not an expert in policy, and who is struggling to evaluate a policy grant that they worry is high-risk, might invite a bunch of external experts to give a view and have a veto; given the risk, if any of those experts veto, they might choose not to fund it.
This then goes wrong if different experts veto for very different reasons (more likely if “there’s a surprising amount of disagreement in what projects [experts] think are good”). In the worst case it could be that almost every policy grant gets vetoed by one expert or another and none get funded. This could significantly raise the bar, and it might take a while for a fund manager to notice.
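As a rough illustration of why this adds up quickly (all numbers made up, and not a claim about how the LTFF actually operates): if each of n consulted experts independently vetoes a given grant with some probability p, the share of grants that survives every veto is (1 − p)^n, which shrinks fast as experts are added.

```python
# Made-up numbers, just to illustrate the failure mode: with n experts who each
# veto a given grant with probability p (more likely when experts disagree about
# which projects are good), the share of grants surviving every veto is (1 - p)**n.
for n in (1, 3, 5):
    for p in (0.2, 0.4):
        survives = (1 - p) ** n
        print(f"{n} expert(s), {p:.0%} veto rate each -> {survives:.0%} of grants survive")
```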
Honestly I have no idea if this actually happens or ever works like this. If this is not how things work then great! Mostly trying to bring out into the open the various tools that people can use to evaluate more risky projects and to flag the potential uses and also downsides for discussion.
Oh, hmm, I think we rarely request feedback from more than one or two experts, so I would be somewhat surprised if this has a large effect. But yeah, definitely in biorisk and policy, I think if the expert we ping tends to have a negative take, we probably don’t fund it (and now that you say it, it does feel a bit more like we are asking more experts on policy grants than other grants, so there might be some of the effect that you are describing going on).
If these experts regularly have a large impact on these decisions, that’s an argument for transparency about them. This is a factor that could of course be outweighed by other considerations (ability to give frank advice, confidentiality, etc.). Perhaps it might be worth asking them how they’d feel about being named (with no pressure attached, obviously).
Also, can one volunteer as an expert? I would—and I imagine others (just on this post, perhaps Ian and Sam?) would too.
or funders could, you know, always hire more fund managers who have policy experience!
I don’t think hiring for this is easy. You need aligned people with good judgement, significant policy experience, and a counterfactual impact elsewhere that is equal to or worse than the impact of working for the LTFF. Plus, ideally, good communication ability and a bit of creativity too.
Sure, but if there are 4 people out there with CS backgrounds who fit the bill, there are probably a few without CS backgrounds who do too.
The other thing is that the idea of “policy” as one general thing seems a little off to me. Someone who knows a thing or two about Congress may not have the context or network to evaluate something aimed at some corner of the executive branch, to say nothing of evaluating a policy proposal oriented towards another country.
Hmm I think this makes the problem harder, not easier.
Was this always true? I (perhaps arrogantly) thought that there was a push for greater grantmaker discretion after my comment here (significantly before I joined LTFF), though it was unclear whether my comment had any causal influence.
I think we always had a culture of it, but I do think your comment had a causal influence on us embedding that more into the decision-making process.
Hi Abergal,
I hope you don’t mind me returning to this post. I wanted to follow up on a few things but it has been a super busy month and I have not really had the time.
Essentially I think we did not manage to engage with each other’s cruxes at all. I said I expected the crux was “connected to EA funders approach to minimising risks.” You did not discuss this at all in your reply (except to agree that there are downsides to policy work). But then you suggested that maybe a crux is “our views of the upsides of some of this work”. And then I replied to you but did not discuss that point at all!!
So overall it felt a bit like we just talked past each other and didn’t really engage fully with each other’s points of view. So I felt I should just come back and respond to you about the upsides rather than leave this as it was.
– –
Some views I have (at least from a UK perspective, cannot comment on US):
DISAGREEMENT: I think there is no shortage of policy proposals to push for. You link to Luke’s post and I reply directly to that here. There is the x-risk database of 250+ policy proposals here!! There is work on policy ideas in Future Proof here. The more clear-cut policies are on general risk management and biosecurity, but there are good ideas on AI too, like getting government to be cautious about the use and development of AI by the military (see p42 of Future Proof), or ensuring government has incentives to hire competent risk-aware staff. I never found that I was short of low-downside things to advocate for on AI at any point in my 2 years working on policy. So maybe we disagree there. Very happy to discuss further if you want.
AGREEMENT: I think policy may still be low impact. AI policy in particular (in the UK) has a very minimal chance of reducing AI x-risks, a topic that is not currently in the policy Overton window. I do expect bio and general risk policy is more likely to be directly useful, but even there the effects on x-risks are likely to be small, at least for a while. (That said, I think most current endeavours to reduce AI x-risks or other x-risks have a low probability of success; I put most weight on technical research, but even that is highly uncertain.) I expect we agree there.
POSSIBLE DISAGREEMENT: I think AI is not the only risk. I didn’t bring up AI at all in my post, and none of the examples I know of that applied to the LTFF were AI-focused. Yet you and Habryka bring up AI multiple times in your responses. I think AI risks are ~3x more likely than bio and unknown unknown risks (depending on the timeframe), and I think bio and unknown unknown risks are ~6x easier to address through policy work (in the UK) – I sketch the implied arithmetic after these points. Maybe this is a crux. If the LTFF thinks AI is 1000x more pressing than other risks then maybe this is why you do not value policy work. Could discuss this more if helpful. (If this is true it would be great if the LTFF was public about this.)
NEUTRAL: I think policy change is pretty tractable and cheap. The APPG for Future Generations seems to consistently drive multiple policy changes, with one every 9 months / £35k spent (see here). I don’t think you implied you had any views on this.
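For what it’s worth, here is the rough arithmetic implied by the ratios in the “POSSIBLE DISAGREEMENT” point above – these are my own rough guesses rather than data, so treat the numbers as purely illustrative:

```python
# Rough arithmetic using only the ratios stated above (rough guesses, not data).
ai_vs_other_likelihood = 3            # AI risks ~3x more likely than bio / unknown unknown risks
other_vs_ai_policy_tractability = 6   # bio / unknown unknown risks ~6x easier to address via policy

# If importance scales with likelihood, policy work on bio / unknown unknown risks
# comes out roughly 6 / 3 = 2x as valuable as policy work on AI risks.
print(other_vs_ai_policy_tractability / ai_vs_other_likelihood)  # 2.0

# But if a funder weights AI ~1000x more heavily than other risks,
# the comparison flips decisively towards AI-focused work.
print(other_vs_ai_policy_tractability / 1000)                    # 0.006
```

On these guesses, bio and unknown-risk policy work looks roughly twice as valuable per unit of effort as AI policy work, but a strong enough weighting towards AI (e.g. ~1000x) flips that – which is why it would be useful to know if that is the LTFF’s view.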
– –
Anyway I hope that is useful and maybe gets a bit more at some of the cruxes. Would be keen to hear whether you think any of this is correct, even if you don’t have time to respond in depth to any of these points.
Thank you so much for engaging and listening to my feedback – good luck with future grantmaking!!!
Thank you for sharing, Abergal!
I was just wondering if you could share more concrete examples of “taking up the space” risks. We’re facing some choices around this in Australia at the moment and I want to make sure we’ve considered all downsides of uniting under a shared vision. Are the risks of “taking up the space” mainly:
Less agile—multiple small organizations may be able to work faster
Centralized risk—if one organization among multiple small organizations faces an issue (e.g. brand damage) this is less likely to affect the other organizations
Less diversity of thought—there’s value in taking different approaches to problems, and having multiple small organizations means we’re at less risk of groupthink or quashing diversity of thought
I’d be keen to know if there are others we may not have considered.