Hi Owen and Sebastian,
The assumption behind your argument seems to be that slowing (resp. accelerating) progress in automation will result in faster (resp. slower) changes in the future rather than e.g. uniform time translation. Can you explain the reasoning behind this assumption in more detail?
That isn’t the assumption that’s meant to be driving the argument. I think there are two main factors:
(i) Pushing self-driving cars relative to other automation is likely to increase societal wisdom regarding automation faster. They are very visible and have macro-level effects, and will require us to develop new frameworks for dealing with them. In contrast, better AI in computer games has to a first approximation none of these effects, but could also feed into long-term automation capabilities.
(ii) Pushing for adoption of self-driving cars is useful relative to pushing for improvements in the underlying automation technology, because it will give us longer to deal with these issues for a given automation level (because we can assume that improvements in automation will continue regardless of adoption; although note that adoption may well speed up automation a bit too).
I actually think the assumption you mention is probably true too—because the rest of the economy is likely to continue to grow it will be cheaper relative to wealth to improve automation later, so it could go faster. But this effect seems rather smaller to me, and as increasing automation isn’t the only driver of increasing societal wisdom, I’m much more sceptical about whether it’s good to speed automation as a whole.
Thx for replying!
I’m still not sure I follow your argument in full. Consider two scenarios:
Scenario 1: Self-driving cars are adopted soon. Progress in automation continues. Automation is eventually adopted in other areas as well.
Scenario 2: Self-driving cars are adopted later. Progress in automation still continues, in particular through advances in other fields such as computer game AI. Eventually, self-driving cars and automation in other areas are adopted.
In each of these scenarios, we can consider the time at which a given type/level of automation is adopted. You claim that in scenario 2 these times will be spaced more densely than in scenario 1. However, a priori it is possible to imagine that in scenario 2 all of these times occur later but with the same spacing.
What am I missing?
I agree that it’s possible that your scenario 2 just shifts everything back uniformly in time, but I think in expectation the spacing will be denser.
Toy model: looking at the spacing between self-driving cars and some future automation technology X. A major driver of the time X is adopted is technological sophistication. Whether or not we adopt self-driving cars now won’t have too much effect on the point when we reach the level of technological sophistication needed for technology X. If we had the same social position either way, this would mean that we would adopt X at roughly the same time regardless of when we adopt self-driving cars. Of course social views might be different depending on what happened with self-driving cars.
If we want to maximise the time between self-driving cars and X, we’d be best adopting the cars as soon as possible (given technological constraints), and pushing back adoption of X as long as possible.
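The toy model above can be sketched in a few lines of code. This is purely illustrative and not from the discussion: the function name, the specific years, and the split into a "sophistication" date and a "social delay" are my own assumed simplifications.

```python
# Toy model sketch (illustrative assumptions, not the authors' model):
# the adoption time of future technology X is driven mainly by when the
# underlying technological sophistication threshold is reached, which is
# roughly independent of when self-driving cars are adopted.

def adoption_gap(car_adoption_year, x_sophistication_year, x_social_delay):
    """Years society gets to adapt between self-driving cars and X.

    x_sophistication_year: when the tech for X is ready (independent of cars).
    x_social_delay: extra years society holds off adopting X once it's ready.
    """
    x_adoption_year = x_sophistication_year + x_social_delay
    return x_adoption_year - car_adoption_year

# Adopting the cars earlier, or delaying X, both widen the gap:
early_cars = adoption_gap(car_adoption_year=2025,
                          x_sophistication_year=2040, x_social_delay=5)
late_cars = adoption_gap(car_adoption_year=2035,
                         x_sophistication_year=2040, x_social_delay=5)
assert early_cars > late_cars  # earlier car adoption leaves a longer gap
```

Since X's adoption date is (by assumption) fixed independently of the cars, the gap is maximised exactly by adopting the cars as early as possible and X as late as possible, matching the conclusion above.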
Your toy model makes sense. However, if instead of considering the future automation technology X we consider some past (already adopted) automation technology Y, the conclusion would be the opposite. Therefore, to complete your argument you need to show that in some sense the next significant step in automation after self-driving cars is closer in time than the previous significant step in automation.
I see what you’re thinking. We break the symmetry not by thinking that the next step is going to be closer in time, but that the next step(s) are going to be more important to get right than either self-driving cars or earlier automation.
In a way, the two are interchangeable: if we define “steps” as changes of given magnitude then faster change means more densely spaced steps.
There is another effect that has to be taken into account. Namely, some progress in understanding how to adapt to automation might be happening without the actual adoption of automation, that is, progress that occurs because of theoretical deliberation and broader publicity for the relevant insights. This sort of progress creates an incentive to move all adoption later in time.