>all behaviour can be interpreted as maximising a utility function.
Yes, it indeed can be. However, the less coherent the agent acts, the more cumbersome it will be to describe it as an expected utility maximiser. Once your utility function specifies entire histories of the universe, its description length goes through the roof. If describing a system as a decision theoretic agent is that cumbersome, it’s probably better to look for some other model to predict its behaviour. A rock, for example, is not well described as a decision theoretic agent. You can technically specify a utility function that does the job, but it’s a ludicrously large one.
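To make that concrete, here is a toy sketch of the trivial construction (my own illustration, not anything from the theorems; the rock example and all the names in it are made up):

```python
# Toy sketch (my own illustration): you can always "rationalise" a behaviour
# trace as expected-utility maximisation by letting the utility function be an
# indicator on the exact history the system produces -- but then writing the
# function down means writing the whole history down, which is the
# description-length blow-up at issue.

from typing import Callable, Tuple

History = Tuple[str, ...]  # a full sequence of world-states / actions

def trivial_utility(observed: History) -> Callable[[History], float]:
    """u(h) = 1 if h is exactly the history the system produced, else 0."""
    return lambda h: 1.0 if h == observed else 0.0

# A rock "deciding" to sit still at every timestep:
rock_history: History = tuple("sit_still" for _ in range(1_000))
u_rock = trivial_utility(rock_history)

assert u_rock(rock_history) == 1.0                # the rock is "optimal"
assert u_rock(("roll_downhill",) * 1_000) == 0.0  # every other history "loses"
# Specifying u_rock requires specifying rock_history in full, so the
# description length of this "utility function" scales with the length of the
# history it rationalises.
```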
The less coherent and smart a system acts, the longer the utility function you need to specify in order to model its behaviour as a decision theoretic agent. In this sense, expected-utility-maximisation does rule things out, though the boundary is not binary. It’s telling you what kinds of systems you can usefully model as “making decisions” if you want to predict their actions.
If you would prefer math that talks about the actual internal structures agents themselves consist of, decision theory is not the right field to look at. It just does not address questions like this at all. Nowhere in the theorems will you find a requirement that an agent’s preferences be somehow explicitly represented in the algorithms it “actually uses” to make decisions, whatever that would mean. The theory doesn’t know what these algorithms are, and doesn’t even have the vocabulary to formulate questions about them. It’s like saying we can’t use theorems about the natural numbers to make statements about counting sheep, because sheep are really made of fibre bundles over the complex numbers, rather than natural numbers. The natural numbers are talking about our count of the sheep, not the physics of the sheep themselves, nor the physics of how we move our eyes to find the sheep. And decision theory is talking about our model of systems as agents that make decisions, not the physics of the systems themselves and how some parts of them may or may not correspond to processes that meet some as-yet-unknown embedded-in-physics definition of “making a decision”.
I think this response misses the forest for the trees here. It’s true that you can fit some utility function to behaviour, if you make a sufficiently fine-grained outcome-space on which the preferences now come out coherent, etc. But this removes basically all of the predictive content that Eliezer etc. assume when invoking these theorems.
In particular, the use of these theorems in doomer arguments absolutely does implicitly care about “internal structure” stuff: e.g. one major premise is that non-EU-maximising AIs will reflectively iron out the “wrinkles” in their preferences to better approximate an EU-maximiser, since they will notice that, e.g., their incompleteness leads to exploitability. The OP argument shows that an incomplete-preference agent will be inexploitable by its own lights. The fact that there’s some completely different way to refactor the outcome-space such that, from the outside, it looks like an EU-maximiser is just irrelevant.
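For concreteness, here is a rough toy reconstruction of the kind of single-souring money pump at issue and the forward-looking choice rule that blocks it. This is my own sketch under my own simplifying assumptions, not the OP’s formal argument, and the outcome names are made up:

```python
# Rough reconstruction (my own toy model, not the OP's actual proof) of why
# forward-looking choice with incomplete preferences blocks the standard
# single-souring money pump.

# Incomplete strict preferences: A is strictly better than A_minus;
# B is incomparable with both.
STRICTLY_PREFERRED = {("A", "A_minus")}  # (x, y) means x is strictly preferred to y

def strictly_worse(x: str, y: str) -> bool:
    return (y, x) in STRICTLY_PREFERRED

# The money-pump decision tree, written out as complete plans and their
# terminal outcomes: decline the first swap, or swap to B and keep it, or
# swap to B and then swap again down to A_minus.
PLANS = {
    ("keep",): "A",
    ("swap", "keep"): "B",
    ("swap", "swap"): "A_minus",
}

# Forward-looking rule: never adopt a plan whose outcome is strictly
# dispreferred (by the agent's own lights) to the outcome of some other
# available plan.
admissible = {
    plan: outcome
    for plan, outcome in PLANS.items()
    if not any(strictly_worse(outcome, other) for other in PLANS.values())
}

print(admissible)
# {('keep',): 'A', ('swap', 'keep'): 'B'} -- the agent may end with A or B,
# but never with A_minus, so it is not exploited by its own lights.
# A purely myopic agent that accepts any not-dispreferred swap *can* end at
# A_minus, which is where the exploitability worry comes from.
```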
>If describing a system as a decision theoretic agent is that cumbersome, it’s probably better to look for some other model to predict its behaviour
This also seems to be begging the question: if I have something I think I can describe as a non-EU-maximising decision-theoretic agent, but which can only be described as an EU-maximiser with an incredibly cumbersome utility function, why do we not just conclude that EU-maximisation is the wrong way to model the agent, rather than throwing out the belief that it should be modelled as an agent? If I have a preferential gap between A and B, and you have to jump through some ridiculous hoops to make this look EU-coherent (“he prefers [A and Tuesday and feeling slightly hungry and saw some friends yesterday and the price of blueberries is <£1 and....] to [B and Wednesday and full and at a party and blueberries >£1 and...]”), it seems like the correct conclusion is not to throw away the claim that I am a decision-theoretic agent, but the claim that I am well-modelled as an EU-maximiser.
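To spell out why those hoops are so costly, here is a toy sketch (my own, with made-up context features) of how the fine-graining move blows up the outcome space the utility function has to rank:

```python
# Toy illustration (my own, with made-up context features) of why the
# "fine-grain the outcomes until everything looks coherent" move is so costly:
# the utility function now has to assign a value to every combination of
# context features, so its size blows up exponentially.

from itertools import product

base_outcomes = ["A", "B"]
context_features = {
    "day": ["Tuesday", "Wednesday"],
    "hunger": ["slightly_hungry", "full"],
    "blueberry_price": ["under_1_pound", "over_1_pound"],
    # ...every extra binary feature doubles the table below
}

# The refined outcome space: one "outcome" per combination of base outcome
# and context, which is what the cumbersome utility function must rank.
refined_outcomes = [
    (base, *combo)
    for base in base_outcomes
    for combo in product(*context_features.values())
]

print(len(refined_outcomes))  # 2 * 2 * 2 * 2 = 16 already, and it doubles
                              # with each additional feature smuggled in to
                              # explain away a single preferential gap
```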
>The less coherent and smart a system acts, the longer the utility function you need to specify...
These are two very different concepts? (Equating “coherent” with “smart” is again kinda begging the question.) Re: coherence, it’s just tautologous that the more complexly you have to partition up the outcome-space to make things look coherent, the more complex the resulting utility function will be. Re: smartness, if we’re operationalising this as “ability to steer the world towards states of higher utility”, then it seems like smartness and utility-function-complexity are by definition independent. Unless you mean something more like “ability to steer the world in a way that seems legible to us”, in which case it’s again just tautologous.
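As a toy illustration of that independence claim (my own sketch, with made-up names): two agents can share the same one-line utility function while differing enormously in how well they steer towards high-utility states.

```python
# Toy illustration (my own sketch): both agents below maximise the *same*
# one-line utility function, but differ hugely in how well they steer the
# world towards high utility.

import random
from typing import List

def utility(state: List[int]) -> int:
    """A maximally simple utility function: number of switches that are on."""
    return sum(state)

def dumb_agent(state: List[int], budget: int) -> List[int]:
    """Pokes random switches into random positions; barely steers at all."""
    s = state[:]
    for _ in range(budget):
        i = random.randrange(len(s))
        s[i] = random.randint(0, 1)
    return s

def smart_agent(state: List[int], budget: int) -> List[int]:
    """Greedily turns switches on; steers hard towards high utility."""
    s = state[:]
    for i in range(len(s)):
        if budget == 0:
            break
        if s[i] == 0:
            s[i] = 1
            budget -= 1
    return s

start = [0] * 20
print(utility(dumb_agent(start, budget=10)))   # typically well below 10
print(utility(smart_agent(start, budget=10)))  # 10: uses the whole budget
# The utility function is as short as they come and identical for both agents;
# only the steering ability ("smartness") differs.
```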
That all sounds approximately right, but I’m struggling to see how it bears on this point:
>If we want expected-utility-maximisation to rule anything out, we need to say something about the objects of the agent’s preference. And once we do that, we can observe violations of Completeness.
Can you explain?