Thank you Luke for sharing your views. I just want to pick up one thing you said where your experience of the longtermist space seems sharply contrary to mine.
You said: “We lack the strategic clarity … [about] intermediate goals”. This is a great point and I fully agree. I am also super pleased to hear you have been working on this. You then said:
I caution that several people have tried this … such work is very hard
This surprised me when I read it. In fact my intuition is that such work is highly neglected, that almost no one has done any of it, and that it is reasonably tractable. Upon reflection I came up with three reasons for my intuition on this.
1. Reading longtermist research and not seeing much work of this type.
I have seen some really impressive forecasting and trend analysis focused work, but if anyone had worked on setting intermediate goals I would expect to see some evidence of basic steps, such as listing out a range of plausible intermediate goals or consensus-building exercises to set viable short- and mid-term visions of what AI governance progress looks like (maybe it’s there and I’ve just not seen it). If anyone had made a serious stab at this I would expect to have seen thorough exploration exercises to map out and describe possible near-term futures, assumption-based planning, scenario-based planning, strategic analysis of a variety of options, tabletop exercises, etc. I have seen very little of this.
2. Talking to key people in the longtermist space and being told this research is not happening.
For a policy research project I was considering recently, I went and talked to a bunch of longtermists about research gaps (e.g. at GovAI, CSET, FLI, CSER, etc.). I was told time and time again that policy research (which I would see as a combination of setting intermediate goals and working out what policies are needed to get there) was not happening, was a task for another organisation, was a key bottleneck that no one was working on, etc.
3. I have found it fairly easy to make progress on identifying intermediate goals and short-term policy goals that seem net-positive for long-run AI governance.
I have an intermediate goal of: key actors in positions of influence over AI governance are well equipped to make good decisions if needed (at an AI crunch time). This leads to specific policies such as: ensuring clear lines of responsibility exist in military procurement of software/AI, or ensuring that, if regulation happens, it is expert-driven and outcome-based, or some of the ideas here. I would be surprised if longtermists looking into this (or other intermediate goals I routinely use) would disagree with the above intermediate goal, or with the claim that these policy suggestions move us towards that goal. I would say this work has not been difficult.
– –
So why is our experience of the longtermist space so different? One hunch I have is that we are thinking of different things when we consider “strategic clarity on intermediate goals”.
Much of my work has been in supporting governments to make long-term decisions, and this has given me a sense of what long-term decision making and “intermediate goal setting” involve. This colours the things I would expect to see if the longtermist community was really trying to do this kind of work, and I compare longtermists’ work to what I understand to be best practice in other long-term fields (from forestry to tech policy to risk management). This approach leaves me thinking that there is almost no longtermist “intermediate goal setting” happening. Yet maybe you have a very different idea of what “intermediate goal setting” involves, based on other fields you have worked in.
It might also be that we read different materials and talk to different people. It might be that this work has happened and I’ve just missed it or not read the right stuff.
– –
Does this matter? I guess I would be much more encouraging than you are about someone doing this work, and much more positive about how tractable such work is. I would advise that anyone doing this work should have a really good grasp of how wicked problems are addressed, how long-term decision making works in a range of non-EA fields, and the various tools that can be used.
As far as I know it’s true that there isn’t much of this sort of work happening at any given time, though over the years there has been a fair amount of non-public work of this sort, and it has usually failed to convince people who weren’t already sympathetic to the work’s conclusions (about which intermediate goals are vs. aren’t worth aiming for, or about the worldview cruxes underlying those disagreements). There isn’t even consensus about intermediate goals such as the “make government generically smarter about AI policy” goals you suggested, though in some (not all) cases the objection to that category is less “it’s net harmful” and more “it won’t be that important / decisive.”
Thank you Luke – great to hear this work is happening, but I am still surprised by the lack of progress and would be keen to see more such work out in public!
(FWIW, a minor point, but I am not sure I would phrase a goal as “make government generically smarter about AI policy”; just being “smart” is not enough. Ideally you want a combination of being smart + having good incentives + having space to take action. To be more precise, when planning I often use COM-B models, as used in international development governance reform work, to ensure all three factors are captured and balanced.)
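To make the COM-B point concrete, here is a minimal sketch (purely illustrative, not a tool mentioned anywhere above) of how one could check the three factors for each key actor against an intermediate goal: capability maps to “smart”, motivation to “good incentives”, and opportunity to “space to take action”, and the weakest factor flags where the goal is most fragile. The class name, actor, and scores below are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class CombAssessment:
    """COM-B style check of one actor against an intermediate goal.

    Each factor is a rough 0-1 judgement:
      capability  - is the actor equipped / "smart" on the issue?
      opportunity - do they have the space and authority to act?
      motivation  - do their incentives point the right way?
    """
    actor: str
    capability: float
    opportunity: float
    motivation: float

    def weakest_factor(self) -> str:
        # The factor with the lowest score is where the intermediate
        # goal is most likely to break down for this actor.
        scores = {
            "capability": self.capability,
            "opportunity": self.opportunity,
            "motivation": self.motivation,
        }
        return min(scores, key=scores.get)


# Hypothetical example: assessing a regulator against the goal
# "well equipped to make good decisions at an AI crunch time".
regulator = CombAssessment(actor="hypothetical regulator",
                           capability=0.7, opportunity=0.4, motivation=0.6)
print(f"Weakest factor for {regulator.actor}: {regulator.weakest_factor()}")
```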