This is a very interesting and weird problem. It feels like the solution should have something to do with the computational complexity of the mapping? E.g. is it a mapping that could be calculated in polynomial or exponential time? If the mapping function is as expensive to compute as just simulating the brain in the first place, then the dust hasn’t really done any of the computational work.
Another way of looking at this: if you do take the dust argument seriously, why do you even need the dust at all? The mapping from dust to mental states exists in the space of mathematical functions, but so does the mapping from time straight to mental states, with no dust involved.
I guess the big question here is when does a sentient observer contained inside a mathematical function “exist”? What needs to happen in the physical universe for them to have experiences? That’s a really puzzling and interesting question.
Hmm. Thanks for the example of the “pure time” mapping of t --> mental states. It’s an interesting one. It reminds me of Max Tegmark’s mathematical universe hypothesis at “level 4,” where, as far as I understand, all possible mathematical structures are taken to “exist” equally. This isn’t my current view, in part because I’m not sure what it would mean to believe this.
I think the physical dust mapping is meaningfully different from the “pure time” mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each time t, I identify each neuron in George Soros’s brain with a different pair of dust specks, then say “at time t+1, if a pair of dust specks is farther apart than it was at time t, the associated neuron fires; if a pair is closer together, the associated neuron does not fire.”
This could conceivably fail if there aren’t enough pairs of dust specks in the universe to make the numbers work out. The “pure time” mapping could never fail to work; it would work (I think) even in an empty universe containing no dust specks. So it feels less grounded, and like an extra leap.
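Purely as an illustration, here’s a minimal Python sketch of that dust-pair mapping; the speck positions, neuron IDs, and the particular pair-to-neuron assignment are all hypothetical placeholders, not anything from the original argument:

```python
import itertools

def dust_mapping(positions_then, positions_now, neuron_ids):
    """Illustrative dust-pair mapping (all inputs are hypothetical).

    positions_then / positions_now: dicts of speck id -> (x, y, z) at times
    t and t+1. neuron_ids: the neurons we want the mapping to account for.
    Returns a dict of neuron id -> True ("fires") / False ("doesn't fire").
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    # Every unordered pair of dust specks is a candidate stand-in for a neuron.
    pairs = list(itertools.combinations(positions_then, 2))
    if len(pairs) < len(neuron_ids):
        raise ValueError("not enough dust-speck pairs to cover every neuron")

    firing = {}
    # Assign each neuron an arbitrary distinct pair of specks; this assignment
    # is where all the descriptive work hides.
    for neuron, (a, b) in zip(neuron_ids, pairs):
        farther = dist(positions_now[a], positions_now[b]) > dist(positions_then[a], positions_then[b])
        firing[neuron] = farther  # farther apart at t+1 means the neuron "fires"
    return firing
```

The `ValueError` check is just the “not enough pairs” failure mode made explicit.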
...
I agree that it seems like there’s something around “how complex is the mapping.” I think what we care about is the complexity of the description of the mapping, though, rather than the computational complexity. I think the George Soros mapping is pretty quick to compute once it’s defined? All the work seems hidden in the definition — how do I know which pairs of dust specks should correspond to which neurons?
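For a rough sense of how much information is hidden in that definition, here’s a back-of-envelope calculation with made-up numbers (the speck count is purely illustrative, and the neuron count is just a standard ballpark figure):

```python
import math

num_neurons = 86_000_000_000      # rough ballpark for a human brain
num_specks = 10 ** 12             # made-up number of dust specks
num_pairs = num_specks * (num_specks - 1) // 2

# Naming one distinct pair per neuron takes roughly log2(num_pairs) bits per
# neuron, so the pair-to-neuron assignment alone costs on the order of:
bits = num_neurons * math.log2(num_pairs)
print(f"{bits:.1e} bits")  # ~7e12 bits, i.e. close to a terabyte of "definition"
```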