However, we can only have such a frame-invariant way if there exists a clean mapping (injection, surjection, bijection, etc.) between P&C, which I think we can't have, even theoretically.
I’m still not sure why you strongly think there’s _no_ principled way; it seems hard to prove a negative. I mentioned that we could make progress on logical counterfactuals; there’s also the approach Chalmers talks about here. (I buy that there’s reason to suspect there’s no principled way if you’re not impressed by any proposal so far).
And whenever we have multiple incompatible interpretations, we necessarily get inconsistencies, and we can prove anything is true (i.e., we can prove any arbitrary physical system is superior to any other).
I don’t think this follows. The universal prior is not objective; you can “prove” that any bit probably follows from a given sequence, by changing your reference machine. But I don’t think this is too problematic. We just accept that some things don’t have a super clean objective answer. The reference machines that make odd predictions (e.g. that 000000000 is probably followed by 1) look weird, although it’s hard to precisely say what’s weird about them without making reference to another reference machine. I don’t think this kind of non-objectivity implies any kind of inconsistency.
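The reference-machine dependence described above can be sketched in a toy model. This is not a real universal prior (that is uncomputable); here each hypothetical "reference machine" is just a table assigning code lengths to a few candidate rules, with prior weight 2^(-length). The rules, machine tables, and code lengths are all made up for illustration.

```python
# Toy illustration (NOT a real universal prior, which is uncomputable).
# A "reference machine" is modeled as a dict mapping hypotheses
# (sequence-generating rules) to code lengths; prior weight = 2**(-length).

def predict_next_bit(machine, observed):
    """P(next bit = 1 | observed) under a 2**(-code length) prior over rules."""
    weights = {0: 0.0, 1: 0.0}
    for rule, code_len in machine.items():
        seq = rule(len(observed) + 1)
        if seq[:len(observed)] == observed:  # keep rules consistent with the data
            weights[seq[len(observed)]] += 2.0 ** -code_len
    total = weights[0] + weights[1]
    return weights[1] / total if total else 0.5

all_zeros = lambda n: [0] * n
zeros_then_ones = lambda n: [0] * 9 + [1] * max(0, n - 9)

# Machine A encodes "all zeros" cheaply; machine B encodes the gerrymandered
# "nine zeros then ones" rule cheaply. Same data, opposite predictions.
machine_A = {all_zeros: 1, zeros_then_ones: 20}
machine_B = {all_zeros: 20, zeros_then_ones: 1}

obs = [0] * 9  # the sequence 000000000 from the example above
print(predict_next_bit(machine_A, obs))  # near 0: predicts another 0
print(predict_next_bit(machine_B, obs))  # near 1: predicts a 1
```

Machine B looks "weird" to us, but nothing internal to the formalism rules it out; that judgment is made relative to some other machine, which is exactly the non-objectivity at issue.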
Similarly, even if objective approaches to computational interpretations fail, we could get a state where computational interpretations are non-objective (e.g. defined relative to a “reference machine”) and the reference machines that make very weird predictions (like the popcorn implementing a cat) would look super weird to humans. This doesn’t seem like a fatal flaw to me, for the same reason it’s not a fatal flaw in the case of the universal prior.
What you’re saying seems very reasonable; I don’t think we differ on any facts, but we do have some divergent intuitions on implications.
I suspect this question—whether it’s possible to offer a computational description of moral value that could cleanly ‘compile’ to physics—would have non-trivial yet also fairly modest implications for most of MIRI’s current work.
I would expect the significance of this question to go up over time, both in terms of direct work MIRI expects to do, and in terms of MIRI’s ability to strategically collaborate with other organizations. That is, when things shift from “let’s build alignable AGI” to “let’s align the AGI”, it would be very good to have some of this metaphysical fog cleared away, so that people could get on the same ethical page and see that they are in fact on the same page.