I’ll drop in my 2c.

AI governance is a fairly nascent field. As the field grows and we build up our understanding of it, people will likely specialise in sub-parts of the problem. But for now, I think there’s benefit to having this broad category, for a few reasons:

1. There’s a decent overlap in expertise needed to address these questions. By thinking about the first, I’ll probably build up knowledge and intuitions that will be applicable to the second. For example, I might want to think about how previous powerful technologies such as nuclear weapons came to be developed and deployed.
2. I don’t think we currently know what problems within AI governance are most pressing. Once we do, it seems prudent to specialise more.
This doesn’t mean you shouldn’t think of problems of type a and b separately. You probably should.
“There’s a decent overlap in expertise needed to address these questions.”
This doesn’t yet seem obvious to me. Take the nuclear weapons example. Obviously, in the Manhattan Project case, that’s the analogy being gestured at. But a structural risk of inequality doesn’t seem to be that well informed by a study of nuclear weapons. If we have a CAIS world with structural risks, it seems to me that the broad development of AI and its interactions across many companies is pretty different from the discrete technology of nuclear bombs.
I want to note that I imagine this is a somewhat annoying criticism to respond to. If you claim that there are generally connections between the elements of the field, and I point at pairs and demand you explain their connection, it seems like I’m set up to demand large amounts of explanatory labor from you. I don’t plan to do that; I just wanted to acknowledge it.
It definitely seems true that if I want to specifically figure out what to do with scenario a), studying how AI might affect structural inequality shouldn’t be my first port of call. But it’s not clear to me that this means we shouldn’t have the two problems under the same umbrella term. In my mind, it mainly means we ought to start defining sub-fields with time.
Thanks for the response!

“I don’t think we currently know what problems within AI governance are most pressing. Once we do, it seems prudent to specialise more.”
It makes sense not to specialize early, but I’m still confused about what the category is. For example, the closest thing to a definition in this post (btw, this isn’t a criticism if a definition is missing from this post; perhaps it’s aimed at people with more context than me) seems to be:
“AI governance concerns how humanity can best navigate the transition to a world with advanced AI systems”
To me, that seems to be synonymous with the AI risk problem in its entirety. A first guess at what might be meant by AI governance is “all the non-technical stuff that we need to sort out regarding AI risk”. Wonder if that’s close to the mark?
“A first guess at what might be meant by AI governance is ‘all the non-technical stuff that we need to sort out regarding AI risk’. Wonder if that’s close to the mark?”
A great first guess! It’s basically my favourite definition, though negative definitions probably aren’t all that satisfactory either.
We can make it more precise by saying (I’m not sure of the origin of this one; it might be Jade Leung or Allan Dafoe):
“AI governance has a descriptive part, focusing on the context and institutions that shape the incentives and behaviours of developers and users of AI, and a normative part, asking: how should we navigate a transition to a world of advanced artificial intelligence?”
It’s not quite the definition we want, but it’s a bit closer.
OK, thanks! The negative definition makes sense to me. I remain unconvinced that there is a positive definition that hits the same bundle of work, but I can see why we would want a handle for the non-technical work of AI risk mitigation (even before we know what the correct categories are within that).