Thank you Luke for sharing your views. I just want to pick up on one thing you said, where your experience of the longtermist space seems sharply contrary to mine.
You said: "We lack the strategic clarity … [about] intermediate goals". This is a great point and I fully agree, and I am super pleased to hear you have been working on this. You then said:
"I caution that several people have tried this … such work is very hard."
This surprised me when I read it. In fact my intuition is that such work is highly neglected, that almost no one has done any of it, and that it is reasonably tractable. Upon reflection, I came up with three reasons for this intuition.
1. Reading longtermist research and not seeing much work of this type.
I have seen some really impressive forecasting and trend-analysis work, but if anyone had worked on setting intermediate goals I would expect to see evidence of basic steps, such as listing out a range of plausible intermediate goals, or consensus-building exercises to set viable short- and mid-term visions of what AI governance progress looks like (maybe it's there and I've just not seen it). If anyone had made a serious stab at this I would expect to have seen thorough exploration exercises to map out and describe possible near-term futures, assumption-based planning, scenario-based planning, strategic analysis of a variety of options, tabletop exercises, etc. I have seen very little of this.
2. Talking to key people in the longtermist space and being told this research is not happening.
For a policy research project I was considering recently, I went and talked to a bunch of longtermists about research gaps (e.g. at GovAI, CSET, FLI, CSER, etc.). I was told time and time again that policy research (which I would see as a combination of setting intermediate goals and working out what policies are needed to get there) was not happening, was a task for another organisation, was a key bottleneck that no one was working on, etc.
3. I have found it fairly easy to make progress on identifying intermediate goals and short-term policy goals that seem net-positive for long-run AI governance.
I have an intermediate goal of: key actors in positions of influence over AI governance are well equipped to make good decisions if needed (at an AI crunch time). This leads to specific policies such as: ensuring clear lines of responsibility exist in military procurement of software/AI, or that if regulation happens it should be expert-driven, outcome-based regulation, or some of the ideas here. I would be surprised if longtermists looking into this (or other intermediate goals I routinely use) would disagree with the above intermediate goal, or deny that the policy suggestions move us towards that goal. I would say this work has not been difficult.
– –
So why is our experience of the longtermist space so different? One hunch I have is that we are thinking of different things when we consider "strategic clarity on intermediate goals".
My work in supporting governments to make long-term decisions has given me a sense of what long-term decision making and "intermediate goal setting" involve. This colours what I would expect to see if the longtermist community were really trying to do this kind of work: I compare longtermists' work to what I understand to be best practice in other long-term fields (from forestry to tech policy to risk management). This approach leaves me thinking that there is almost no longtermist "intermediate goal setting" happening. Yet maybe you have a very different idea of what "intermediate goal setting" involves, based on other fields you have worked in.
It might also be that we read different materials and talk to different people. It might be that this work has happened and I've just missed it or not read the right stuff.
– –
Does this matter? I guess I would be much more encouraging than you are about someone doing this work, and much more positive about how tractable it is. I would advise that anyone doing this work should have a really good grasp of how wicked problems are addressed, how long-term decision making works in a range of non-EA fields, and the various tools that can be used.
As far as I know it's true that there isn't much of this sort of work happening at any given time, though over the years there has been a fair amount of non-public work of this sort, and it has usually failed to convince people who weren't already sympathetic to the work's conclusions (about which intermediate goals are vs. aren't worth aiming for, or about the worldview cruxes underlying those disagreements). There isn't even consensus about intermediate goals such as the "make government generically smarter about AI policy" goals you suggested, though in some (not all) cases the objection to that category is less "it's net harmful" and more "it won't be that important / decisive."
Thank you Luke. Great to hear this work is happening, but I am still surprised by the lack of progress and would be keen to see more such work out in public!
(FWIW, a minor point, but I am not sure I would phrase a goal as "make government generically smarter about AI policy": just being "smart" is not enough. Ideally you want a combination of smart + good incentives + space to take action. To be more precise when planning, I often use COM-B models (Capability, Opportunity, Motivation – Behaviour), as used in international development governance reform work, to ensure all three factors are captured and balanced.)