There’s a decent overlap in expertise needed to address these questions.
This doesn’t yet seem obvious to me. Take the nuclear weapons example: in the Manhattan Project case, that’s obviously the analogy being gestured at. But a structural risk of inequality doesn’t seem to be well informed by a study of nuclear weapons. If we end up in a CAIS world with structural risks, the broad development of AI and its interactions across many companies seems pretty different from the discrete technology of nuclear bombs.
I want to note that I imagine this is a somewhat annoying criticism to respond to. If you claim that there are general connections between the elements of the field, and I point at pairs and demand you explain their connection, it seems like I’m set up to extract large amounts of explanatory labor from you. I don’t plan to do that; I just wanted to acknowledge it.
It definitely seems true that if I want to specifically figure out what to do with scenario a), studying how AI might affect structural inequality shouldn’t be my first port of call. But it’s not clear to me that this means we shouldn’t have the two problems under the same umbrella term. In my mind, it mainly means we ought to start defining sub-fields over time.