Oops. I think I forgot to add a couple of lines, which might’ve made the argument harder to understand. I slightly updated the post, most of the added stuff is bold.
We are a collection of atoms interacting in ways that make us feel and draw inferences. The level of neurons is likely the relevant level of abstraction: if the neuron-level structure is approximately identical but the atoms are different, we expect the inputs and outputs to be approximately the same, which means that whatever determines the outputs runs at the level of neurons.
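As a minimal sketch of that multiple-realizability point (not from the post; the circuit, weights, and the choice of number representations are all illustrative assumptions): here the same two-input neuron circuit is realised in two different low-level substrates, binary floating point and exact rational arithmetic, standing in for "different atoms". Because the neuron-level structure (wiring, weights, threshold) is identical, the input-output behaviour is identical.

```python
from fractions import Fraction

def step(x):
    # Threshold activation: the neuron fires iff its summed input exceeds 0.
    return 1 if x > 0 else 0

def tiny_circuit(inputs, num):
    """A fixed two-input, one-output neuron circuit.

    `num` converts raw numbers into a particular low-level representation
    (the "atoms"); the neuron-level structure is the same either way.
    """
    w1, w2, bias = num(1) / num(2), num(1) / num(2), -num(3) / num(4)
    x1, x2 = num(inputs[0]), num(inputs[1])
    return step(w1 * x1 + w2 * x2 + bias)

# The same circuit realised in two different substrates agrees on every input.
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    as_float = tiny_circuit(pattern, float)     # binary floating point
    as_exact = tiny_circuit(pattern, Fraction)  # exact rational arithmetic
    assert as_float == as_exact
    print(pattern, "->", as_exact)
```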
...
When we see other humans' reactions, or see events happening to them, we imagine what it must be like to be them, feel something of what we think they must be feeling, and infer that there is something they are feeling in this moment; in doing so, our neural circuits make an implicit assumption that other people have qualia. This assumption happens to be correct, and it can be justified independently: we can validly infer that the neural circuits of other humans run subjective experiences, because those circuits output words about qualia, something we wouldn't expect to happen by chance in the absence of qualia.
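A toy Bayesian rendering of that inference (not from the post; every probability below is a made-up placeholder, not a measured quantity): if unprompted talk about subjective experience is very unlikely to be produced by a system without qualia, then observing such talk pushes the posterior strongly toward "has qualia".

```python
# All numbers are illustrative assumptions chosen only to show the shape
# of the update, not claims about actual frequencies.
p_qualia = 0.5                   # prior: the other system has qualia
p_talk_given_qualia = 0.9        # systems with qualia often talk about experience
p_talk_given_no_qualia = 0.001   # qualia-talk arising "by chance" is rare

# Bayes' rule: P(qualia | talk) = P(talk | qualia) * P(qualia) / P(talk)
p_talk = (p_talk_given_qualia * p_qualia
          + p_talk_given_no_qualia * (1 - p_qualia))
posterior = p_talk_given_qualia * p_qualia / p_talk
print(f"P(qualia | talks about qualia) = {posterior:.4f}")  # ~0.9989
```

Under these assumptions the observation is nearly decisive; the point of the post's argument is that for non-talking animals this particular piece of evidence is simply unavailable.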
But when we see animals that don't talk about qualia, … [our] neural circuits still recognise emotion in animals just as they do in humans, but that recognition is no longer tied to a valid way of inferring that there must be an experience of this emotion.