Why should we do any policy stuff? Isn’t all policy work a waste of time? Didn’t [random unconnected policy thing] not work? Etc.
This was a conversation with me, and sadly strikes me as a strawman of the questions asked (I could recount the conversation more fully, though that feels a bit privacy-violating).
Of course on the LTFF I do not take it as a given that any specific approach to improving the long term future will work, and I actively question whether any broad set of approaches has any chance to be competitive with other things that we can do. It’s important that the LTFF is capable of coming to the conclusion that policy work is in general unlikely to be competitive, and just because there are already a bunch of people working in policy doesn’t mean I no longer question whether the broad area might be an uphill battle.
I personally am probably the person on the LTFF most skeptical of policy work, especially of work that aims to do policy analysis embedded within government institutions, or that looks like it’s just a generic “let’s try to get smarter people into government” approach. I’ve seen this fail a good number of times, and many interviews I’ve done with policy people suggest that people find it very hard to think clearly when embedded within government institutions. I also think generic power-seeking behavior where we try to “get our guys into government” is somewhat uncooperative and also has detrimental epistemic effects on the community.
Other people on the LTFF and the rest of the funding ecosystem seem to be more optimistic here, though one thing I’ve found from discussing my cruxes here for many hundreds of hours with others is that people’s models of the policy space drastically differ, and most people are pessimistic about most types of policy work (though then often optimistic about a specific type of policy work).
The questions I actually asked were of the type “how do you expect to overcome these difficult obstacles, reported by many people in policy, that seem to generally make policy work ineffective?”. I don’t think we’ve really had any policy successes with regards to the Long Term Future, so if a project does not have compelling answers to this kind of question, I am usually uninterested in funding it, though again, others on the LTFF have a pretty different set of cruxes.
I often ask the same types of question with regards to AI Alignment research: “it seems like we haven’t really found much traction with doing AI Alignment research, and it seems pretty plausible to me that we don’t really know how to make progress in this domain. Why do you think we can make progress here?”. I do have a higher prior on at least some AI Alignment research working, and also think the downside risk from marginal bad AI Alignment research is less than from marginal policy advocacy or intervention, so my questions tend to be a bit more optimistically framed, or I can contribute more of the individual model pieces.
Thank you Habryka. Great to get your views.
This was a conversation with me, and sadly strikes me as a strawman of the questions asked (I could recount the conversation more fully, though that feels a bit privacy-violating). … I often ask the same types of question with regards to AI Alignment research
Apologies if I misrepresented this in some way (if helpful, and if you have a recording or notes, I am happy for things I said and you said to be made public). I have not had this kind of questioning on any other funding application and it felt very strange to me. I said in an email to Caleb (EA Funds) recently that “it would surprise me a lot if people applying for grants to do say AI safety technical work got an hour of [this type]”. So perhaps count me as surprised. If this kind of questioning is just an idiosyncratic Habryka tool for getting to grips with applicants of different types then I am happy with it. Will edit the post.
Of course on the LTFF I do not take it as a given that any specific approach to improving the long term future will work
I guess it depends how specific you are being. Obviously I don’t think it should be taken as given that “specific think tank plan x” would be good, but I do think it is reasonable for a fund to take it as given that, at a high level, “policy work” would be good. And if the LTFF does not think this then why does the LTFF actively reach out to policy people to get them to apply?
(I recognise there may be a difference here between you, Habryka, and the rest of the LTFF, as you say you are more sceptical than others)
I personally am probably the person on the LTFF most skeptical of policy work, especially of work that aims to do policy analysis embedded within government institutions, or that looks like it’s just a generic “let’s try to get smarter people into government” approach.
Now it is my turn to claim a strawman. I have never applied to the LTFF with a plan anything close to a “let’s try to get smarter people into government” approach. Nor were any of the 5 applications to the LTFF I am aware of anything like this approach.
FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; I think in many cases I wouldn’t expect that it would lead to productive discussions (and in fact could be counterproductive, in that it might put off potential allies who we might want to work with later).
I make the calls on who is the primary evaluator for which grants; as Habryka said, I think he is probably the most skeptical of policy work among people on the LTFF, and he hasn’t been the primary evaluator for almost any (maybe none?) of the policy-related grants we’ve had. In your case, I thought it was unusually likely that a discussion between you and Habryka would be productive and helpful for my evaluation of the grant (though I was interested primarily in different but related questions, not “whether policy work as a whole is competitive with other grants”), because I generally expect people more embedded in the community (and you, Sam, in particular, which I really appreciate) to be more open to pretty frank discussions about the effectiveness of particular plans, lines of work, etc.
FWIW I think if this is just how Habryka works then that is totally fine from my point of view. If it helps him make good decisions then great.
(From the unusualness of the questioning approach and the focus on “why policy” I took it to be a sign that the LTFF was very sceptical of policy change as an approach compared to other approaches, but I may have been mistaken in making this assumption based on this evidence.)
I guess it depends how specific you are being. Obviously I don’t think it should be taken as given that “specific think tank plan x” would be good, but I do think it is reasonable for a fund to take it as given that, at a high level, “policy work” would be good. And if the LTFF does not think this then why does the LTFF actively reach out to policy people to get them to apply?
I don’t think we should take it as a given! I view figuring out questions like this as most of our job, so of course I don’t want us to have an institutional commitment to a certain answer in this domain.
And if the LTFF does not think this then why does the LTFF actively reach out to policy people to get them to apply?
In order to believe that something could potentially be furthered by someone, or that it has potential, I don’t think I have to take it as a given that work in that general area “would be good”.
I also think it’s important to notice that the LTFF page only lists “policy research” and “advocacy”, and doesn’t explicitly list “policy advocacy” or “policy work” more broadly (see Asya’s clarification below). I don’t think we currently actively solicit a lot of policy work for the LTFF, though maybe other fund managers who are more optimistic about that type of work have done more soliciting.
And separately, the page of course reflects something much closer to the overall fund’s view (probably with a slant towards Asya, since she is head of the LTFF), and this is generally true for our outreach. I think it’s good and valuable to have people with a diversity of views on the LTFF (and for people who are more skeptical of certain work to talk to the relevant grantees).
Now it is my turn to claim a strawman. I have never applied to the LTFF with a plan anything close to a “let’s try to get smarter people into government” approach. Nor were any of the 5 applications to the LTFF I am aware of anything like this approach.
Sorry! Seems like this is just me communicating badly. I did not intend to imply (though I can now clearly see how one might read it as such) that your work in-particular falls into this category. I was trying to give some general reasons for why I am skeptical of a lot of policy work (I think only some of these reasons apply to your work). I apologize for the bad communication here.
I also think it’s important to notice that the LTFF page only lists “policy research” and “advocacy”, and doesn’t explicitly list “policy advocacy” or “policy work” more broadly
The page says the LTFF is looking to fund projects on “reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects”, which a casual reader could take to include policy advocacy. If the aim of the page is to avoid giving the impression that policy advocacy is something the LTFF actively looks to fund, then I think it could do a better job.
I don’t think we currently actively solicit a lot of policy work for the LTFF,
Maybe this has stopped now. The last posts I saw were from Dec 2021, from an EA Funds staff member who has since left (here). The posts said things like: “we’d be excited to consider grants related to policy and politics. We fund all kinds of projects” (here). It is plausible to me that things like that were giving people the wrong impression about the LTFF’s willingness to fund certain projects.
– –
I don’t think we should take it as a given! I view figuring out questions like this as most of our job, …
Fair enough. That seems like a reasonable approach too and I hope it is going well and you are learning a lot!!
– –
Sorry! Seems like this is just me communicating badly
No worries. Sorry too for any ways I have misrepresented our past interactions. Keep being wonderful <3
Thanks for posting this comment, I thought it gave really useful perspective.
“I don’t think we’ve really had any policy successes with regards to the Long Term Future”
This strikes me as an odd statement. If you’re talking about the LTF fund, or EA long-termism, it doesn’t seem like much policy work has been funded.
If you’re talking more broadly, wouldn’t a policy win like decreasing the amount of lead emitted into the atmosphere (which has negative effects on IQ and health generally) be a big win for the long term future?
I think this is false, e.g. a reasonable subset of Open Phil’s Transformative AI risks grantmaking is on policy.
This strikes me as an odd statement. If you’re talking about the LTF fund, or EA long-termism, it doesn’t seem like much policy work has been funded.
Huh, why do you think that? CSET was Open Phil’s largest grant to date, and I know of at least another $20MM+ in policy projects that have been funded.
Sadly, I think a lot of policy grants are announced less publicly, because publicity is usually harmful for policy projects or positions (which I think is at least some evidence that they are somewhat adversarial/non-cooperative, which is one of the reasons why I have a somewhat higher prior against policy projects). Approximately all policy applications to the LTFF end up requesting that we do not publish a public writeup on them, so we often refer them to private funders if we think they are a good idea.
I guess I was just wrong, I hadn’t looked into it much!
“I don’t think we’ve really had any policy successes with regards to the Long Term Future”
Biased view incoming:
I think the LTFF’s only (public) historical grant for policy advocacy, to the APPG for Future Generations, has led to better policy in the UK, in particular on risk management. For discussions of this, see impact reports here and here, independent reviews here and here, and criticism here.
Additionally, I think CLTR has been doing impactful long-term-focused policy work in the UK.
If you’re talking more broadly, wouldn’t a policy win like decreasing the amount of lead emitted into the atmosphere (which has negative effects on IQ and health generally) be a big win for the long term future?
Yeah, I think this is definitely a candidate for a great intervention, though I think importantly it wasn’t the result of someone entering the policy space with a longtermist mindset.
If someone had a concrete policy they wanted to push for (or a plan for discovering policies) of that magnitude, then I would likely be excited about funding it. I would still be somewhat worried about how likely it would be to differentially accelerate the development of dangerous technologies vs. increase humanity’s ability to navigate rapid technological change (since most risk to the future is anthropogenic, I am generally skeptical of interventions that just speed up technological progress across the board), but my sense is abating lead poisoning looks better than most other things on this dimension.
An offshoot of lead emission in the atmosphere might be the work being done at LEEP (Lead Exposure Elimination Project): https://forum.effectivealtruism.org/posts/ktN29JneoQCYktqih/seven-more-learnings-from-leep
(I work for the LTFF)

Other people on the LTFF and the rest of the funding ecosystem seem to be more optimistic here, though one thing I’ve found from discussing my cruxes here for many hundreds of hours with others is that people’s models of the policy space drastically differ, and most people are pessimistic about most types of policy work (though then often optimistic about a specific type of policy work).
Surely something akin to this critique can also be leveled at e.g. alignment research.
Oh, sorry, I didn’t intend this at all as a critique. I intended this as a way to communicate that I don’t think I am that alone in thinking that most policy projects are pretty unlikely to be helpful.
Sorry, “critique” was a poor choice of words on my part. I just meant that “most LT plans will fail, and most LT plans that at least some people you respect like will on an inside view certainly fail” is just the default for trying to reason well on the frontier of LT stuff. But I’m worried that the framing will sound like you meant it narrowly for policy. Also, I’m worried your implied bar for funding policy is higher than what LTFF people (including yourself) actually use.
Hmm, yeah, I think we are both using subpar phrasing here. I think this is true for both policy and AI Alignment, but for example less true for biorisk, where my sense is there is a lot more agreement that certain interventions would definitely help (with some disagreement on the magnitude of the help, but much less than for AI Alignment and policy).
I agree about biosecurity, sure. Although I actually think we’re much less conceptually confused about biosecurity policy than we are about AI policy. For example, pushing for a reasonable subset of the Apollo report seems reasonable to me.
Yeah, I think being less conceptually confused is definitely part of it.