I think some forms of AI-assisted governance have great potential.
However, it seems like several of these ideas are (in theory) possible in some form today, yet in practice don’t get adopted. E.g.:
> Enhancing epistemics and decision-making processes at the top levels of organizations, leading to more informed and rational strategies.
I think it’s very hard to get even the most basic forms of good epistemic practices (e.g. putting probabilities on helpful, easy-to-forecast statements) embedded at the top levels of organizations (for standard moral maze-type reasons).
As such, I think the role of AI here is pretty limited; the main bottleneck to adoption is political / bureaucratic, rather than technological.
I’d guess the way to make progress here is to align [implementation of AI-assisted governance] with [incentives of influential people in the organization], i.e. you first have to get the organization to actually care about good governance (perhaps by joining it, or by using external levers).
[Of course, if we go through crazy explosive AI-driven growth then maybe the existing model of large organizations being slow will no longer be true—and hence there would be more scope for AI-assisted governance]
I definitely agree that it’s difficult to get organizations to improve governance. External pressure seems critical.
As stated in the post, I think it’s possible that external pressure could come to AI capabilities organizations in the form of regulation. Hard, but possible.
I’d (gently) push back against this part:
> I think it’s very hard to get even the most basic forms of good epistemic practices
I think that there are clearly some practices that seem good but don’t get used. But there are many that do get used, especially at well-run companies. In fact, I’d go so far as to say that, at least for the issue of “performance and capability” (rather than alignment/oversight), I’d trust the best-run organizations today a lot more than EA ideas of good techniques.
These organizations are often highly meritocratic, very intelligent, and their leaders are good at cutting out the BS and homing in on key problems (at least when doing so is useful to them).
I expect that our techniques, like probabilities and forecastable statements, just aren’t that great at these top levels. If much better practices emerge using AI, I’d feel good about them being used.
Or, at least for the part of “AIs helping organizations make tons of money by suggesting strategies and changes”, I’d expect businesses to be fairly efficient.