Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (i.e., “modeling stuff about yourself” in your brain) in a way that optimism/pessimism or pain-avoidance doesn’t. (Although wouldn’t a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc.? Even tiny mammals like mice/rats display sophisticated social behaviors...)
I tend to assume that some kind of panpsychism is true, so you don’t need extra “circuitry for experience” in order to turn visual-information-processing into an experience of vision. What would such extra circuitry even do, if not the visual information processing itself? (Seems like maybe you are a believer in what Daniel Dennett calls the “fallacy of the second transduction”?) Consequently, I think it’s likely that even simple “RL algorithms” might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated “experiences of vision”! But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision necessarily be tied together into a coherent visual field, etc.
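(For concreteness, the “visual information processing” I have in mind here is nothing exotic. Below is a purely illustrative, hypothetical sketch in NumPy of a toy classifier’s forward pass, with random weights and made-up shapes; I’m not claiming this particular snippet experiences anything, only pointing at the sort of computation under discussion.)

```python
# Illustrative toy "image classifier" forward pass in plain NumPy.
# Weights are random and untrained, so it classifies nothing useful;
# the point is just to show what kind of computation is being discussed.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def conv2d_valid(image, kernel):
    """Slide a k x k kernel over a 2-D image (no padding, stride 1)."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

def classify(image, kernels, readout):
    """One forward pass: convolve, apply a nonlinearity, pool, read out class scores."""
    features = [relu(conv2d_valid(image, k)).mean() for k in kernels]  # global average pooling
    logits = readout @ np.array(features)
    return int(np.argmax(logits))  # index of the highest-scoring class

# Hypothetical 28x28 grayscale input and random (untrained) parameters.
image = rng.random((28, 28))
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
readout = rng.standard_normal((10, 4))  # 10 classes, 4 pooled features
print(classify(image, kernels, readout))
```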
So, I tend to think that fish and other primitive creatures probably have “qualia”, including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it’s kind of just “suffering happening nowhere” or “an experience of suffering not connected to anything else”—the fish doesn’t know it’s a fish, doesn’t know that it’s suffering, etc, the fish is just generating some simple qualia that don’t really refer to anything or tie into a larger system. Whether you call such a disconnected & shallow experience “real qualia” or “real suffering” is a question of definitions.
I think this personal view of mine is fairly similar to Eliezer’s from the Sequences: there are no “zombies” (among humans or animals), there is no “second transduction” from neuron activity into a mythical medium-of-consciousness (no “extra circuitry for experience” needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia. So, animals and even simpler systems probably have qualia in some sense. But since animals aren’t self-aware (and/or have less self-awareness than humans), their qualia don’t matter (and/or matter less than humans’ qualia).
...Anyways, I think our core disagreement is that you seem to be equating “has a self-model” with “has qualia”, versus I think maybe qualia can and do exist even in very simple systems that lack a self-model. But I still think that having a self-model is morally important (atomic units of “suffering” that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it’s probably fine to eat fish.
I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake. I agree that I see a lot of people being confused and making mistakes, but I don’t think the problems are solved!
I appreciate this comment.
Qualia (IMO) certainly is “information processing”: there are inputs and outputs. And it is part of a larger information-processing thing, the brain. What I’m saying is that there’s information processing happening outside of the qualia circuits, and some of the results of that outside processing are inputs to our qualia.
I think it’s likely that even simple “RL algorithms” might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated “experiences of vision”
Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans’ brains to the algorithms implemented by your brain, because all of you talk about subjective experience; but how do you, inside your neural circuitry, make the inference that a similar thing happens in neurons that just process visual information?
You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation. This is valid. Thinking that visual information processing is part of what makes qualia (i.e., that there’s no way to replace a bunch of your neurons with something that outputs the same stuff without first seeing and processing something, such that you’ll experience seeing as before) is something you can theorize about, but it is not a valid inference; you don’t have a way of matching the computation of qualia to the whole of your brain.
And how can you match it to matrix multiplications that don’t talk about qualia, that didn’t have evolutionary reasons for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only a large, trained one? Where does that expectation come from?
I’m not saying that qualia is solved. We don’t yet know how to build it, and we can’t yet scan brains and say which circuits implement it. But some people seem more confused than warranted, and they spend resources less effectively than they could’ve.
And I’m not equating qualia to a self-model. Qualia is just the experience of information. It doesn’t require a self-model, though on Earth, so far, I expect the two to have been correlated.
If there’s suffering and experience of extreme pain, in my opinion, it matters even if there isn’t reflectivity.