Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.
I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each time t, I identify each possible pairing of dust specks with a different neuron in George Soros's brain, then say "at time t+1, if a pair of dust specks is farther apart than it was at time t, the associated neuron fires; if a pair is closer together, the associated neuron does not fire."
This could conceivably fail if there aren't enough pairs of dust specks in the universe to make the numbers work out. The "pure time" mapping could never fail to work; it would work (I think) even in an empty universe containing no dust specks. So it feels less grounded, and like an extra leap.
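To make the pairwise mapping concrete, here's a minimal sketch of the procedure described above, including the failure mode when there aren't enough pairs. All names and inputs are hypothetical; the arbitrary pair-to-neuron assignment is just lexicographic order here, which is exactly the part the definition leaves unspecified:

```python
import itertools
import math

def decode_neuron_states(specks_t, specks_t1, num_neurons):
    """Map dust-speck pair distances to neuron firings (hypothetical sketch).

    specks_t, specks_t1: positions (x, y, z) of the same dust specks at
    times t and t+1. Each unordered pair of specks is assigned to one
    neuron; the neuron "fires" iff that pair moved farther apart.
    """
    pairs = list(itertools.combinations(range(len(specks_t)), 2))
    if len(pairs) < num_neurons:
        # The failure mode from the text: not enough pairs of dust
        # specks in the universe to cover every neuron.
        raise ValueError("not enough dust-speck pairs for this brain")

    # The arbitrary part - which pair corresponds to which neuron - is
    # hidden in this ordering; here we simply take the first
    # num_neurons pairs in lexicographic order.
    states = []
    for (i, j) in pairs[:num_neurons]:
        fired = math.dist(specks_t1[i], specks_t1[j]) > math.dist(specks_t[i], specks_t[j])
        states.append(fired)
    return states
```

Note that once the pairing is fixed, computing the firing pattern is just a distance comparison per neuron, which is part of why the work seems to live in the definition rather than in the computation.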
...
I agree that it seems like there's something around "how complex is the mapping." I think what we care about is the complexity of the description of the mapping, though, rather than the computational complexity. I think the George Soros mapping is pretty quick to compute once defined? All the work seems hidden in the definition: how do I know which pairs of dust specks should correspond to which neurons?