Interesting! And nice to see ADT make an appearance ^_^
I want to point to where ADT+total utilitarianism diverges from SIA. Basically, SIA has no problem with extreme “Goldilocks” theories—theories that imply that only worlds almost exactly like the Earth have inhabitants. These theories are a priori unlikely (complexity penalty) but SIA is fine with them (if h1 is “only the Earth has life, but has it with certainty”, while h2 is “every planet has life with 50% probability”, then SIA loves h1 twice as much as h2).
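To make the factor of two explicit (a quick sketch; the notation N(h), for the expected number of observers in our exact epistemic situation under h, is mine, not from the post): SIA weights each hypothesis by N(h), so

$$\frac{P_{SIA}(h_1)}{P_{SIA}(h_2)} = \frac{P(h_1)\,N(h_1)}{P(h_2)\,N(h_2)} = \frac{P(h_1)\cdot 1}{P(h_2)\cdot \tfrac{1}{2}} = 2\,\frac{P(h_1)}{P(h_2)}.$$

Observers on other planets under h2 aren't in our exact epistemic situation, so they don't add to N(h2) no matter how many of them there are.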
ADT+total ut, however, cares about agents that reason similarly to us, even if they don’t evolve in exactly the same circumstances. So h2 is weighted much more heavily than h1 under that theory.
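For contrast, a toy version of the ADT+total ut side (the parameter N, the number of candidate planets, is an assumption of mine for illustration): what matters for the decision is the expected number of agents whose choices are linked with ours, call it L(h), since total utilitarianism sums their stakes. Then

$$L(h_1) = 1, \qquad L(h_2) \approx \frac{N}{2},$$

so for large N the h2 worlds dominate the calculation, even though SIA gives h1 twice the boost.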
This may be relevant to further developments of the argument.