Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.
For example, “True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success—it would just partially counterbalance them.” My intuition says the opposite of this. I don’t think it’s at all clear whether increasing the capability of the U.S. military is a good or bad thing.
I agree that object-level progress is to be preferred over meta-level progress on methodology.
Here’s some support for that claim which I didn’t write out.
There was a hypothesis called “risk homeostasis” which held that people always accept the same level of risk: e.g. it doesn’t matter that you give people seatbelts, because they will drive faster and faster until the probability of an accident is the same. This turned out to be wrong; people did drive faster, but not so much faster as to fully offset, let alone exceed, the safety benefits. The idea of moral hazard from victory leading to too many extra wars strikes me as very similar. It’s a superficially attractive story that lets one simplify the world and not have to think as much about complex tradeoffs. In both cases you are taking another agent and oversimplifying their motivations: the driver just has a fixed risk constraint, and beyond that wants nothing but speed; the state just wants to avoid bleeding too much, and beyond that threshold wants nothing but foreign influence. But the driver has a complex utility function, or maybe an inconsistent set of goals, about the relative value of more safety vs. less safety and more speed vs. less speed; therefore, when you give her some new capacity, she isn’t going to spend all of it on going faster. She’ll spend some on going faster and some on being safer.
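As a toy illustration of that allocation point (my own sketch, not anything from the original discussion): suppose the driver splits a capability budget $B$ between speed $v$ and safety $s$, with $v + s = B$ and diminishing returns to both, say $u(v, s) = \log v + \log s$. Maximizing gives $v^{*} = s^{*} = B/2$, so when better technology raises $B$, she drives somewhat faster and also ends up somewhat safer. Risk homeostasis corresponds to the corner case of a hard constraint on accident risk, under which every unit of extra capability would go to speed.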
Likewise the state does not want to spend too much money, does not want to lose its allies and influence, does not want to face internal political turmoil, etc. When you give the state more capacity, it spends some of it on additional bad conquests, but also some on winning good wars, on saving money, and on stabilizing its domestic politics. The benefits of improved weaponry are also fungible: the state can e.g. spend less on the military while obtaining a comparable level of security.
Security dilemmas throw a wrench into this picture, because what improves security for one state harms the security of another. However, even in the limiting theoretical case where security is purely zero-sum, I think this just means that improvements in weaponry have neutral impact. In the real world, where some U.S. goals are more positive-sum in nature, the impact of better weapons will be better than neutral.
Thanks for the response. That theory seems interesting and reasonable, but to my mind it doesn’t constitute strong evidence for the claim. The claim is about a very complex system (international politics) and requires a huge weight of evidence.
I think we may be starting from different positions: if I imagine believing that the U.S. military is basically a force for good in the world, what you’re saying sounds more intuitively appealing. However, I neither believe nor disbelieve this.
Well, I’m not trying to convince everyone that society needs a looser approach to AI. Just that this activism is dubious, unclear, plausibly harmful, etc.