I think this is an interesting idea—that “a large bureaucracy pressures the leaders to be more aligned”—but it really doesn’t sit well with me.
In my experience, small bureaucracies often behave better than large ones. Startups infamously get worse and less efficient as they scale. I haven’t seen many analyses arguing that “the extra managers who come in help make sure that the newer CEO is more aligned.”
I generally expect similar with national governments.
Again though, it is key that “better governance” is actually better. It’s clearly possible to use AI for all kinds of things, many of which are negative. My main argument above is that some of those possibilities include “making the top levels of governance better and more aligned”, and that smart actors can pressure these situations to happen.
I don’t disagree with your final paragraph, and I think this is worth pursuing generally.
However, I do think we must consider the long-term implications of replacing long-established structures with AI. These structures have evolved over decades or centuries, and their dismantling carries significant risks.
Regarding startups: to me, it seems like their decline in efficiency as they scale is a form of regression to the mean. Startups that succeed do so because of their high-quality decision-making and leadership. As they grow, the decision-making pool expands, often including individuals who haven’t undergone the same rigorous selection process. This dilution can reduce overall alignment of decisions with those the founders would have made (a group already selected for decent decision-making quality, at least based on the limited metrics which cause startup survival).
Governments, unlike startups, do not emerge from such a competitive environment. They inherit established organizations with built-in checks and balances designed to enhance decision-making. These checks and balances, although contributing to larger bureaucracies, are probably useful for maintaining accountability and preventing poor decisions, even though they also prevent more drastic change when this is necessary. They also force the decision-maker to take into account another large group of stakeholders within the bureaucracy.
I guess part of my point is that there is a big difference between alignment with the decision-maker and the quality of decision-making.
> These structures have evolved over decades or centuries, and their dismantling carries significant risks.
I don’t see my recommendations as advocating for a “dismantling”—it’s more like an “augmenting.”
I’m not at all recommending we move to replace our top executives with AIs anytime soon. I’m not sure if or when that might ever be necessary.
Rather, I think we could use AIs to help assist and oversee the most sensitive parts of what already exists. Like, top executives and politicians can use AI systems to give them advice, and can separately work with auditors who would use advanced AI tools to help them with oversight.
In my preferred world, I could even see it being useful to have more people in government, not fewer. Productivity could improve a lot in this sector, but this sector also seems very important relative to others, and perhaps expectations for it could rise a lot too.
> I guess part of my point is that there is a big difference between alignment with the decision-maker and the quality of decision-making.
I agree these are quite separate things. I think AI systems could help with both, though, and I would prefer that they do.