I think some form of AI-assisted governance has great potential.
However, it seems like several of these ideas are (in theory) possible in some format today, yet in practice don't get adopted. E.g.:
> Enhancing epistemics and decision-making processes at the top levels of organizations, leading to more informed and rational strategies.
I think it's very hard to get even the most basic forms of good epistemic practices (e.g. putting probabilities on helpful, easy-to-forecast statements) embedded at the top levels of organizations (for standard moral-maze-type reasons).
As such, I think the role of AI here is pretty limited: the main bottleneck to adoption is political/bureaucratic rather than technological.
I'd guess the way to make progress here is in aligning [implementation of AI-assisted governance] with [incentives of influential people in the organization], i.e. you first have to get the organization to actually care about good governance (perhaps by joining it, or by using external levers).
[Of course, if we go through crazy explosive AI-driven growth, then maybe the existing model of large organizations being slow will no longer hold, and hence there would be more scope for AI-assisted governance.]
I definitely agree that it's difficult to get organizations to improve governance. External pressure seems critical.
As stated in the post, I think it's possible that external pressure could come to AI capabilities organizations in the form of regulation. Hard, but possible.
I'd (gently) push back against this part:

> I think it's very hard to get even the most basic forms of good epistemic practices
I think there are clearly some practices that seem good but don't get used. But there are many that do get used, especially at well-run companies. In fact, I'd go so far as to say that, at least for the issue of "performance and capability" (rather than alignment/oversight), I'd trust the best-run organizations today a lot more than EA ideas of good techniques.
These organizations are often highly meritocratic and very intelligent, and their leaders are good at cutting out the BS and homing in on key problems (at least when doing so is useful to them).
I expect that our techniques, like probabilities and forecastable statements, just aren't that great at these top levels. If much better practices come out using AI, I'd feel good about them being adopted.
Or, at least for the part about "AIs helping organizations make tons of money by suggesting strategies and changes", I'd expect businesses to be fairly efficient.