We all agree that you should get utility. You are pointing out that FDT agents get more utility. But once they are already in the situation where they've been created by the demon, FDT agents get less utility. If you are the type of agent who follows FDT, you will tend to get more utility, just as you would if you were the type of agent who follows CDT in a scenario that tortures FDTists. The question of decision theory is: given the situation you are in, what gets you more utility? What is the rational thing to do? Eliezer's theory turns you into the type of agent who often gets more utility, but that does not make it the right decision theory. The fact that you want to be the type of agent who does X doesn't make doing X rational if doing X is bad for you and not doing X is artificially rewarded.
Again, there is no dispute about whether, on average, one-boxers or two-boxers get more utility, nor about which kind of AI you should build.