It seems like you’re saying here that there won’t be clean rules for determining logical counterfactuals? I agree this might be the case, but it doesn’t seem clear to me. Logical counterfactuals seem pretty confusing, and there seems to be a lot of room for better theories about them.
Right, and I would argue that logical counterfactuals (in the way we’ve mentioned them in this thread) will necessarily be intractably confusing, because they’re impossible to do cleanly. I say this because in the “P & C” example above, we need a frame-invariant way to interpret a change in C in terms of P. However, we can only have such a frame-invariant way if there exists a clean mapping (injection, surjection, bijection, etc.) between P and C, which I think we can’t have, even theoretically.
(Unless we define both physics and computation through something like constructor theory… at which point we’re not really talking about Turing machines as we know them—we’d be talking about physics by another name.)
This is a big part of the reason why I advocate trying to define moral value in physical terms: if we start with physics, then we know our conclusions will ‘compile’ to physics. If instead we start with the notion that ‘some computations have more moral value than others’, we’re stuck with what I argue is an intractable problem: we don’t have a frame-invariant way to precisely identify which computations are happening in any given physical system (and likewise, which aren’t happening). I.e., statements about computations will never cleanly compile to physical terms. And whenever we have multiple incompatible interpretations, we necessarily get inconsistencies, and we can prove anything is true (i.e., we can prove any arbitrary physical system is superior to any other).
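To make the ambiguity concrete, here’s a minimal sketch (the state labels and both interpretation maps are made up purely for illustration) of how one physical trajectory can be read as two different computations, depending on which mapping we pick:

```python
# A toy illustration (hypothetical state labels and mappings) of why "which
# computation is this physical system running?" has no unique answer until we
# fix an interpretation map from physical states to computational states.

# One physical trajectory: the system passes through four distinct microstates.
physical_trajectory = ["s0", "s1", "s2", "s3"]

# Interpretation A: read the trajectory as a 2-bit counter stepping 00 -> 01 -> 10 -> 11.
interpretation_a = {"s0": "00", "s1": "01", "s2": "10", "s3": "11"}

# Interpretation B: read the *same* trajectory as a single bit being flipped
# repeatedly: 0 -> 1 -> 0 -> 1.
interpretation_b = {"s0": "0", "s1": "1", "s2": "0", "s3": "1"}

def computational_reading(trajectory, interpretation):
    """Translate a physical trajectory into a sequence of computational states."""
    return [interpretation[state] for state in trajectory]

print(computational_reading(physical_trajectory, interpretation_a))  # ['00', '01', '10', '11']
print(computational_reading(physical_trajectory, interpretation_b))  # ['0', '1', '0', '1']

# Both readings are consistent with the same physical history, yet they
# attribute different computations to the system.
```

Neither reading is privileged by the physics alone; choosing between them requires exactly the kind of frame-invariant mapping I’m claiming we don’t have.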
Does that argument make sense?
… that said, it would seem very valuable to make a survey of possible levels of abstraction at which one could attempt to define moral value, and their positives & negatives.
I think we have a lot more theoretical progress to make on understanding consciousness and ethics. On priors, I’d expect that progress to produce more satisfying answers over time without ever yielding a complete answer to ethics. Though of course I could be wrong here; intuitions seem to vary a lot. It seems more likely to me that we’ll find a simple unifying theory for consciousness than for ethics.
However, we can only have such a frame-invariant way if there exists a clean mapping (injection, surjection, bijection, etc.) between P and C, which I think we can’t have, even theoretically.
I’m still not sure why you strongly think there’s _no_ principled way; it seems hard to prove a negative. I mentioned that we could make progress on logical counterfactuals; there’s also the approach Chalmers talks about here. (I buy that there’s reason to suspect there’s no principled way if you’re not impressed by any proposal so far).
And whenever we have multiple incompatible interpretations, we necessarily get inconsistencies, and we can prove anything is true (i.e., we can prove any arbitrary physical system is superior to any other).
I don’t think this follows. The universal prior is not objective; you can “prove” that any bit probably follows from a given sequence, by changing your reference machine. But I don’t think this is too problematic. We just accept that some things don’t have a super clean objective answer. The reference machines that make odd predictions (e.g. that 000000000 is probably followed by 1) look weird, although it’s hard to precisely say what’s weird about them without making reference to another reference machine. I don’t think this kind of non-objectivity implies any kind of inconsistency.
Similarly, even if objective approaches to computational interpretations fail, we could get a state where computational interpretations are non-objective (e.g. defined relative to a “reference machine”) and the reference machines that make very weird predictions (like the popcorn implementing a cat) would look super weird to humans. This doesn’t seem like a fatal flaw to me, for the same reason it’s not a fatal flaw in the case of the universal prior.
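Here’s a minimal sketch of the universal-prior point, with two hand-picked hypotheses and made-up program lengths standing in for two reference machines; the numbers are purely illustrative:

```python
# A toy illustration of reference-machine dependence: a Solomonoff-style prior
# weights each hypothesis by 2**(-program length), but the program length
# depends on the reference machine. Machines U and V and the lengths below are
# hypothetical, chosen only to show the effect.

observed = "000000000"

# Two hypotheses about how the sequence continues after the observed prefix.
predictions = {
    "keeps printing 0s": "0",
    "prints a 1 next": "1",
}

# Program length (in bits) of each hypothesis on two different reference machines.
# Machine U makes "keeps printing 0s" the short program; machine V is wired so
# that "prints a 1 next" happens to be the short one.
code_length = {
    "U": {"keeps printing 0s": 3, "prints a 1 next": 20},
    "V": {"keeps printing 0s": 20, "prints a 1 next": 3},
}

def next_bit_probability(machine, bit):
    """P(next bit == bit | observed), using prior weights 2**(-program length)."""
    weights = {h: 2.0 ** -code_length[machine][h] for h in predictions}
    total = sum(weights.values())
    return sum(w for h, w in weights.items() if predictions[h] == bit) / total

for machine in ("U", "V"):
    p = next_bit_probability(machine, "1")
    print(f"Machine {machine}: P(next bit after {observed} is '1') = {p:.6f}")
# U says the 1 is very unlikely (~0.000008); V says it is nearly certain (~0.999992).
```

Nothing inside the formalism singles out U over V; V just looks weird to us, which is the same situation I’d expect if computational interpretations end up being defined relative to a reference machine.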
What you’re saying seems very reasonable; I don’t think we differ on any facts, but we do have some divergent intuitions on implications.
I suspect this question—whether it’s possible to offer a computational description of moral value that could cleanly ‘compile’ to physics—would have non-trivial yet also fairly modest implications for most of MIRI’s current work.
I would expect the significance of this question to go up over time, both in terms of direct work MIRI expects to do, and in terms of MIRI’s ability to strategically collaborate with other organizations. I.e., when things shift from “let’s build alignable AGI” to “let’s align the AGI”, it would be very good to have some of this metaphysical fog cleared away so that people could get on the same ethical page, and see that they are in fact on the same page.
… that said, it would seem very valuable to make a survey of possible levels of abstraction at which one could attempt to define moral value, and their positives & negatives.
Totally agreed!