This strikes me as very foolish. Consider:
“Similarly, when you look at a color wheel, you can see how the color red slowly and continuously shades into orange when it is mixed with yellow.” -- but what is happening in the brain to produce this conscious sensation? As your gaze shifts along the color wheel, fewer and fewer of your retina’s red “cones” are activated by incoming photons, and more and more green “cones” are activated, as the color shifts from red through orange towards yellow and green. Those retinal cells emit digital-style “spike trains” of neural firing signals. Surely the rest of your visual cortex determines whether a color is red, green, or in between based on the ratio of red-cone spikes to green-cone spikes: essentially a digital signal.
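(To make the point concrete, here is a minimal toy sketch in Python. The spike counts and thresholds are entirely made up for illustration; this is not a model of actual cortical processing.)

```python
# Toy sketch (made-up numbers, not real neuroscience): classifying hue
# from nothing but whole-number spike counts from two cone populations.

def decode_hue(red_spikes: int, green_spikes: int) -> str:
    """Infer a color label from the discrete ratio of cone spike counts."""
    total = red_spikes + green_spikes
    if total == 0:
        return "no signal"
    ratio = red_spikes / total  # fraction of spikes from "red" cones
    if ratio > 0.80:
        return "red"
    elif ratio > 0.55:
        return "orange"
    elif ratio > 0.45:
        return "yellow"
    else:
        return "green"

print(decode_hue(95, 5))   # red
print(decode_hue(70, 30))  # orange
print(decode_hue(20, 80))  # green
```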
Even if neural spike trains are in some magical way “not digital”, isn’t the light from the color wheel technically digital in the first place? Color, after all, comes from photons, and photons come in discrete, “quantized”, whole-number units. (Wavelength might truly be analog, but that analog color information is lost when processed by the cones; all you get is a “digitized” ratio of activations across the different cone types.)
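(Another hedged sketch of the same idea: a continuous wavelength goes in, but only discrete whole-number photon-capture counts come out. The Gaussian sensitivity curves are idealized stand-ins, with peaks loosely inspired by the real L- and M-cone peaks near 560 nm and 530 nm.)

```python
# Toy sketch: analog wavelength in, discrete photon-capture counts out.
import math
import random

def cone_sensitivity(wavelength_nm: float, peak_nm: float, width_nm: float = 40.0) -> float:
    """Idealized Gaussian sensitivity curve (a stand-in for real cone response data)."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def photon_captures(wavelength_nm: float, n_photons: int = 1000):
    """Each photon is either captured or not, so the output is a pair of integers."""
    p_long = cone_sensitivity(wavelength_nm, peak_nm=560.0)    # "red" (L) cone
    p_medium = cone_sensitivity(wavelength_nm, peak_nm=530.0)  # "green" (M) cone
    long_count = sum(random.random() < p_long for _ in range(n_photons))
    medium_count = sum(random.random() < p_medium for _ in range(n_photons))
    return long_count, medium_count

# 600.0 nm and 600.3 nm are distinct analog values, but after quantization
# the two integer count pairs are often indistinguishable.
print(photon_captures(600.0))
print(photon_captures(600.3))
```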
You say that a computer would experience an emotion or sensation (like fear) as a binary, on/off boolean. But what about a machine-learning “neural net” design, where each of the millions of individual nodes in the network has “weights” influenced by many factors? A node representing fearfulness might have ties to many other relevant nodes, perhaps activating at 80% of full strength when triggered by a nearby node indicating that the system is probably seeing a rattlesnake, at only 53% when triggered by a different node indicating a harmless garter snake, and at only 16% when seeing a potentially dangerous tool like a kitchen knife. This system of weights, activation strengths, and associations between concepts seems much closer to the “analog” human experience of thought than to a pure logic-world where everything is either on or off.
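(Here too, a quick illustrative sketch: the weights below are reverse-engineered to reproduce the made-up percentages above, just to show that such a node’s output is naturally a graded value rather than an on/off boolean.)

```python
# Toy "fear node": a continuous activation level, not a boolean.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights from upstream "concept" nodes into the fear node.
FEAR_WEIGHTS = {
    "rattlesnake": 1.40,     # sigmoid(1.40)  ~ 0.80
    "garter_snake": 0.12,    # sigmoid(0.12)  ~ 0.53
    "kitchen_knife": -1.65,  # sigmoid(-1.65) ~ 0.16
}

def fear_activation(active_concept: str) -> float:
    """Graded fear-node activation given one fully active upstream node."""
    return sigmoid(FEAR_WEIGHTS[active_concept])

for concept in FEAR_WEIGHTS:
    print(f"{concept}: {fear_activation(concept):.0%}")
```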
For more info, I would encourage you to read about the many parallels between the human visual system and machine-learning “neural net” architectures. The two are similar enough in places that I start to wonder whether image-recognition programs can be said to have some kind of visual experience. (Obviously, image-recognition programs have no sense of self / ego / self-model, so even if it’s true that they have some kind of experience, it would be impossible for the program to have self-awareness in the way that a human does. The visual experience would be detached from any larger consciousness, à la “panpsychist” theories.)
I’ve been comparing human neural computation with machine-learning neural-net designs, but of course “digital people” would be designed to mimic human biology much more closely! To be clear, I’m still very uncertain whether “digital people” would be conscious or not. I just don’t think “digital versus analog” is anywhere near a knockdown argument.
I agreed with your comment (I found it convincing) but downvoted it because, if I were a first-time poster here, I would be much less likely to post again after having my first post characterized as foolish.
As one of many “naive functionalists”, I found the OP very valuable as a challenge to my thinking, so I want to come down strongly against discouraging such posts in any way.
I agree: the EA community claims to be “open to criticism”, but having someone call a first-time poster’s well-articulated and well-argued post foolish is, quite frankly, really disappointing.
In addition, the poster is a professional and has valuable knowledge regardless of how you feel about the merits of their argument.
I’m a student and run an EA group at my university. I really wish the community would be more open to professionals like this poster, who aren’t affiliated with an EA organization but can contribute different perspectives that aren’t as common within the community.
Ah, I didn’t notice that this was a totally new post! Honestly, the writing style and polish felt like such a good match for the Forum that I assumed it must be coming from somebody who posts here all the time. In that light: sorry, Marcusarvan, for immediately jumping into a disagreement and not even noticing all the common ground evident in your writing style and EA interests!