I think this is a good idea, but it would benefit greatly from narrowing the scope and from finding what answers are already known before brainstorming what to investigate. Given that, I think you’d benefit from some of the basic works on policy analysis, rather than policy engagement, to see what is already understood. I’ll specifically point to Bardach’s A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving as a good place to start, followed by Weimer and Vining’s book.
Thanks David! Appreciate you having a look and for the resources.
Definitely agree, the scope will end up being much narrower. We wanted to keep this initial stage really broad—hoping to capture as many interesting and useful questions as possible. Then, as a next step, we’re going to whittle it down to the highest-priority questions: essentially those that would be valuable for the field to have insights into but that haven’t yet been addressed by existing literature or work. Hope to get your thoughts at that stage as well!
Sounds great—and my guess is that lots of the most valuable work will be in “how can we use technique X for EA” for a variety of specific tools, rather than developing new methods, and will require deep dives into specifics.
Thanks for this. I’ll fire off a bunch of quickly written reactions in this comment and in replies. Let me know if you’d like me to elaborate on anything.
Some of these questions might relate to policy in general, for example: ‘What are the steps in a policy process?’
Some of the questions might relate to risk policy, for example: ‘How is policy made under uncertainty and risk?’
Some questions might relate to specific X/GCRs, for example: ‘How can policy windows be useful to the X/GCR field in engaging policy?’
I think that third point conflates two separate things: questions related to specific x-risks/GCRs (e.g., how can policy windows be useful to the AI risk field in engaging policy?) and questions related to x-risk/GCR policy in general (but more narrowly than risk policy in general—risk policy would also include policies solely relevant to smaller-scale risks). So I’d suggest a four-part breakdown, like a Venn diagram, narrowing in from policy in general, to risk policy, to x-risk/GCR policy in general, to policy on specific x-risks/GCRs.
I’d also suggest adding a bucket for policy that’s especially relevant/similar to x-risk/GCR policy even though it’s not explicitly about x-risks/GCRs or even about risk (or where the primary reason it’s relevant isn’t that it’s about risk). I think you’ve correctly identified that a notable aspect of x-risk/GCR policymaking is that it must contend with substantial risk and uncertainty, and thus that we can learn useful lessons from other policymaking efforts that must contend with substantial risk and uncertainty, even if they aren’t explicitly or closely related to x-risks/GCRs. But I think other notable aspects, which likewise allow for learning from other policy areas, include:
Huge externalities within and between borders and generations
GCR reduction is in most cases like a global public good, and x-risk reduction is a transgenerational global public good
So we can learn from e.g. climate change policymaking for reasons unrelated to climate change itself being a GCR and involving risk and uncertainty
Presumably we can also learn from things like policymaking to prevent ozone depletion
Key role of emerging technologies
Key role of great power relations and perhaps “race” dynamics (including but not limited to arms races)
I guess this also suggests the high-level question “What is distinctive and challenging about x-risk/GCR policymaking, and what other areas can we therefore learn from?” (Maybe you already covered this and I missed it.) One previous research effort along those lines (but focused on AI specifically) which I found interesting was this from MIRI in 2013.
I might add something about “What are the actions funders can take to influence policy outcomes? How well does each tend to work? In what situations are they most appropriate? How can they best be implemented?”
Example actions include funding advocacy campaigns, funding research, funding track 1.5/track 2 dialogues, and funding capacity building stuff (like fellowships that create the next batch of experts).
It might be that some actions tend to be more effective overall, some are more effective for particular risks, and some are more effective during certain types of policy windows while others are the best move when no particular window is open.
Messaging: Who are the best ‘messengers’ for highlighting X/GCRs to policymakers? How can they best be supported?
I think a related idea would be “What are the best ways of assessing which policy proposals will be tractable and which framings are best? How good are those methods?” E.g., how much should we invest in polling, message testing, talking to experts/grantmakers/campaigners/etc., or Tetlock-style forecasting to assess which policies might get public and policymaker support and which framings might best support them?
(We could also of course ask “What policy proposals will be tractable and what framings are best?”, and use those methods to answer it, but then that’s not meta policy research.)
I’d probably add some things related to forecasting and maybe foresight, scenario planning, horizon scanning, and maybe red-teaming (of ideas).
I think you could see this as similar to how you highlight the science-policy interface (likewise, forecasting should in theory be an important input into policymaking), or similar to how you and I respectively highlight risk+uncertainty and externalities+emerging tech (one distinctive thing about x-risk/GCR policy is how relevant forecasting etc. is).
A related thing is “trying to act well in advance of a problem occurring, or even in advance of its shape or importance being clear to most people”. One case study often mentioned in this context is Leo Szilard and nuclear weapons (e.g. here). See also https://forum.effectivealtruism.org/tag/long-range-forecasting. One could in theory look into how often people have attempted to influence policy in that sort of way or on that sort of issue, when and how it’s gone well or poorly, etc.
You could check out this draft research agenda of mine for additional question ideas. (I know you’ve already seen it, but perhaps it’s worth glancing at it again in light of this particular project, and I’m also putting the link here for other people’s potential benefit.)
Thanks Michael, all great points and really useful additions. I’ve added those in. Your draft research agenda was definitely an inspiration for this work, though I realise I hadn’t looked at it in a while, so thanks for re-sharing. It also shows that each meta-policy question can be broken down into all sorts of mini meta-policy questions. I’ll be keen to speak with you about how you’ve approached prioritising across them all.
Glad it was helpful!
I’ll be keen to speak with you about how you’ve approached prioritising across them all.
Unfortunately the summary on that is that I haven’t really done any further work developing the agenda, prioritising across its questions, or actually working on the questions. (Where “further work” means “since initially writing and sharing the agenda”.)
That said:
Some RP interns did produce some outputs related to the agenda:
https://forum.effectivealtruism.org/posts/f8Cc4XikFGMdrZJAa/towards-a-longtermist-framework-for-evaluating-democracy-1
https://forum.effectivealtruism.org/posts/MLfvPMZFWx4jLZrNy/key-characteristics-for-evaluating-future-global-governance
https://forum.effectivealtruism.org/posts/Ds2PCjKgztXtQrqAF/disentangling-improving-institutional-decision-making-2
https://forum.effectivealtruism.org/posts/E4QnGsXLEEcNysADT/issues-with-futarchy
We’re currently hiring for a Longtermism researcher, and it’s possible this person would end up working on things related to that agenda
Thanks for starting this discussion! I have essentially the same comment as David, just a different body of literature: policy process studies.
We reviewed the field in the context of our Computational Policy Process Studies paper (section 1.1). From that, I recommend Paul Cairney’s work, e.g. Understanding Public Policy (2019), and Weible & Sabatier’s Theories of the Policy Process (2018).
Section 4 of the Computational Policy Process Studies paper contains research directions we think are promising and that can be investigated with other methods, too. The paper was accepted by Complexity and is currently undergoing revisions—the reviewers liked our summary and overall thrust; the maths is just too basic for the audience, so we’re expanding the model. Section 1 of our Long-term Institutional Fit working paper (an update is in the works there, too) also ends with concrete questions we’d like answered.