(I’m sorry if this comment is not accessible to many people. I also might have missed something in reading the paper, please do let me know if that’s the case!)
I do think many EAs who haven’t studied philosophy of mind probably accept functionalism a bit too credulously (if only implicitly). It doesn’t matter very much right now, but maybe it will later.
But I’m not really convinced by this paper. I skimmed it and here are some initial thoughts:
A TV screen flickers between frames. If you watch an old or broken TV, you’ll notice this, but once the frame rate is high enough, you experience it continuously. Of course, it’s not actually continuous. A similar phenomenon occurs with pixels: they are discrete, but we experience them continuously.
You say that light is continuous, not discrete, but light (and every other elementary particle) behaves as discrete packets (photons) as well as waves. This makes me wonder if there is a real difference between analog and digital at extremely high levels of fidelity.
You give the example of mechanical watches, but I’m pretty sure the ones that seem continuous (i.e. have “sweeping hands” rather than ticking) are actually just higher frequency, and still move discretely rather than continuously. See here. Again, we experience them continuously.
You mention hue, saturation, and brightness. We represent these in a computer just fine with only 24 bits (8 for each) most of the time, and we still get most of the same experiences. There are higher-fidelity color schemes, but we barely notice the difference.
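To make that concrete, here is a minimal sketch (illustrative Python with made-up values) of what 8 bits per channel buys you: each channel takes one of 256 discrete levels, and the worst-case round-trip error is about 1/510 of the full range, far below anything we notice.

```python
def quantize_8bit(x: float) -> int:
    """Map a value in [0, 1] to one of 256 discrete levels."""
    return round(x * 255)

def dequantize_8bit(level: int) -> float:
    """Map an 8-bit level back into [0, 1]."""
    return level / 255

# Arbitrary "analog" hue, saturation, brightness values (hypothetical numbers).
hue, sat, brt = 0.1234567, 0.87654, 0.5
packed = [quantize_8bit(v) for v in (hue, sat, brt)]        # 3 x 8 = 24 bits total
recovered = [dequantize_8bit(level) for level in packed]
errors = [abs(a - b) for a, b in zip((hue, sat, brt), recovered)]
print(packed, errors)  # every error is below 0.002 of the full range
```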
You argue in the paper that neurons are analog rather than digital. I agree that neurons are not simply on/off. But again, does this show anything about whether you can properly represent a neuron by just allocating more than one bit to it? Something I find particularly problematic is that the evidence that neurons are analog presumably came from measuring action potentials and analyzing them. But aren’t those action potential measurements themselves represented digitally in a computer? If so, how could they be evidence of analog behavior?
In the paper you have this caveat (and further explanation), which seems to dismantle many of my objections above:

On the Lewis-Maley view we adopt, to be analog does not require continuity, but only monotonic covariation in magnitude between the representation and what is represented. That increase or decrease can happen in steps, or it can happen smoothly (i.e., discretely or continuously). For example, consider the fuel gauges found in cars. Older cars often have a physical dial that more-or-less continuously moves from ‘F’ to ‘E’ as fuel is consumed; this dial is an analog representation of the amount of fuel in the car, because as fuel decreases, so does the literal angle of the dial. Newer cars often have a different way of displaying fuel. Instead of a physical dial, fuel is displayed as a bar graph on an LED or LCD display. Importantly, that bar graph is composed of discrete segments. Nevertheless, this is still an analog representation of the amount of fuel: as fuel decreases, the number of segments decreases. In contrast, we can imagine a fuel gauge that simply displays the number of gallons (or liters) of fuel in the tank (e.g., ‘6.5’ for six and a half gallons). Here, as fuel decreases, the digits displayed do not increase or decrease (the way they would if we changed the font in a paper): they simply change.
The suggestion here is that if we encode something in unary (number of on bits = magnitude), it is qualitatively different from encoding it in binary or decimal. This is not a straightforward claim. In your paper, it relies on panpsychism with microexperience: roughly speaking, the idea that consciousness arises from the microconsciousness of fundamental particles. I think you’ve done a pretty decent job of arguing the conclusion given that premise, but I find the premise pretty unconvincing myself, and anyone else who finds it unconvincing is similarly unlikely to be convinced by your argument.
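To spell out how I’m reading the unary/binary distinction, here is a small sketch (Python, purely illustrative): in unary, a physical magnitude of the representation (the count of ‘on’ bits) covaries monotonically with the represented quantity, like the segmented bar-graph fuel gauge; in binary it does not.

```python
def unary(n: int, width: int = 8) -> str:
    """Encode n as n 'on' bits followed by padding, like a bar-graph gauge."""
    return "1" * n + "0" * (width - n)

def binary(n: int, width: int = 8) -> str:
    """Encode n in ordinary positional binary."""
    return format(n, f"0{width}b")

for fuel in range(9):  # hypothetical fuel levels 0..8
    u, b = unary(fuel), binary(fuel)
    print(fuel, u, u.count("1"), b, b.count("1"))
# The unary on-bit counts rise monotonically with fuel (0, 1, 2, ..., 8),
# whereas the binary on-bit counts jump around (0, 1, 1, 2, 1, 2, 2, 3, 1).
```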
TLDR: Yes, the magnitude that a discrete representation in bits stands for is completely contingent and arbitrary. But to say that two functionally identical organisms that are also isomorphic (usually more than we could ever ask for) differ in the consciousness they produce seems to require some kind of microphenomenal laws. If you don’t believe in such laws, you shouldn’t believe this argument.
Thanks for the post—I’ve encountered this “consciousness must arise from an analog substrate” view before in places like this conversation with Magnus Vinding and David Pearce, and am interested in understanding it better.
I don’t think I really follow the argument for this view, but even granting that consciousness requires an analog substrate, would that change our priorities? It seems as though those who want to create artificial sentience (including conscious uploads of human minds) would just use analog computers instead. I suppose if you’re imagining a future in which artificial sentience might arise and have transformative effects before other forms of transformative AI, this consideration could be important, since developing analog computers competitive with digital ones would take time and may be intractable for now.
But assuming high-fidelity digital mind emulations are essentially p-zombies and aren’t behaviourally distinguishable from conscious minds, I think there are only a few ways your argument would have strategic relevance for us, none of which seem super compelling to me.
It could be that we should expect people to be comfortable being uploaded as digital minds in worlds where digital minds are in fact conscious, but not comfortable with this otherwise. I don’t think the public is good enough at philosophy of mind that this would hold!
We could be concerned that the first few uploads created before we develop a better understanding of consciousness were made on a mistaken assumption they would have subjective experience, and are not having the (hopefully happy) lives we wanted them to have, but this seems pretty low-stakes to me.
There might be path-dependencies in what human-originating civilization winds up valuing, and unless we adopt the view that consciousness requires analog substrates before creating supposedly-conscious digital minds, we are at greater risk of ending up with the “wrong” moral valuation.
Maybe it is important for us to have a very clear understanding of consciousness, and this is a key component of that. (But I would be wary about backfire risk: I expect in the current moment advancing our understanding of consciousness is slightly negative for the reasons discussed here.)
It’s not clear to me why a system’s being composed of continuous functions rather than digital approximations of continuous functions would be a deciding factor in whether that system is conscious. In fact, there is empirical evidence that digital approximations of continuous functions can be part of conscious systems.
People currently have digital components functioning as part of their brains (brain-controlled robot arms, cochlear implants, retinal prostheses). In the case of brain-controlled arms, some are well integrated into the conscious self. For example, Nathan Copeland’s robot arm is incorporated into his proprioceptive map via force feedback and position sensors. It has been experimentally confirmed that he can sense the arm’s position without visual contact, and he describes this sense as proprioceptive.
Additionally, computers can be built from substantially analog elements (e.g. systems made from analog electrical components—these are just as analog as neurons, although both kinds of system deal in electric charge, which is quantized).
The following link goes to this post rather than the paper you mention:

For reasons just given, I think we should be far more skeptical than some longtermists are. For more, see this paper on simulation theory by me and my co-author Micah Summers in Australasian Journal of Philosophy.
Thanks—fixed!
I think that digital systems cannot have consciousness, for the same reason that biological systems can’t: consciousness, contrary to your intuition, probably can’t exist.
More specifically, Jacy Anthis argued, to my eyes successfully, that even an in-principle omniscient and all-knowing entity could not discover it, because consciousness is subject to a motte-and-bailey fallacy, where the bailey is the imprecise claim and the motte is the precise claim. Given its inconsistency, we can conclude that it cannot exist, even in the mathematical multiverse.
A link to this view is here:
https://www.sentienceinstitute.org/blog/consciousness-semanticism
If you think this is useful, then you might also consider an argument against the existence of identity, from Andreas Mogensen.
Link here:
https://onlinelibrary.wiley.com/doi/full/10.1111/ejop.12552
But, as we have seen, consciousness appears to be analog too. ‘Red’ and ‘orange’ are not merely ‘on’ or ‘off’, like a ‘1’ or a ‘zero.’ Red and orange come in degrees, like Mercury expanding in a thermometer. Sadness, joy, fear, love. None of these features of consciousness are merely ‘on’ or ‘off’ like a one or a zero. They too come in degrees, like the turning of the gears of a watch.

Do you think that the analog aspects of neuron function help explain the fact that we think consciousness appears to be analog, or am I misunderstanding the point?
(My intuition is that it would be quite hard to tell a story in which, say, the varied electromagnetic effects of neurons on each other help to explain why red and orange seem to come in degrees.)
Probably some mind-simulations aren’t conscious—it would feel weird to me if a system that was mostly a lookup table was conscious. But there are probably astronomically more efficient ways to get qualia from atoms than biological humans, and there are some reasons to believe that such organizations of matter matter a lot for the long-term future.
Marcus—I worry that the intuition that ‘consciousness arises more easily from analog than from digital processing’ is utterly unreliable, and depends on humans getting very confused about how to square their intuitive physics module with their intuitive psychology module.
Specifically, our intuitive physics module gives us the sense that ‘analog stuff’ is smooth, creamy, fine-grained, subtle, gentle, etc—and our intuitive psychology module says ‘Oh yes! That’s just like my smooth, creamy, fine-grained, subtle, gentle consciousness!’
By contrast, we have a stereotype of ‘digital stuff’ as brittle, jarring, binary, coarse-grained, simplistic, etc, and imagine that such a fine thing as human consciousness could never arise from such a substrate.
As someone who’s taken an evolutionary functional view of the human mind for many decades of research, I simply don’t trust these ‘philosophy of mind’ intuitions at all.
As some of the other comments here have said, a large deep learning network can be described in quite ‘analog’ terms, or in quite ‘digital’ terms, and neither is very informative about whether the network could actually give rise to sentient experience.
First, I think basing your evidence on the impression that consciousness is a continuous process is very misleading. Also, if you appeal to the scale at which neurons are analog, then at the scale of transistors, and even more so memristors, you could say that the processes inside them are streams of electrons, continuous in nature.
Second, I have a theory about the nature of consciousness and the universe that, if true, would make any digital mind conscious. While I do not have evidence on the subject apart from some concordant observations, I think it would be interesting to lay it out here.
I am going to build my theory on a few axioms and work from there.
Mathematics exists
What this means is that Mathematics exists in a way that is as real as atoms (and in fact more real, in a sense). My only proof of that is that Mathematics works and impacts our world in a very real and tangible way.
I’ll define Mathematics as the set which contains every mathematical object, including algorithms.
If Mathematics exists, then our universe exists.
Because Mathematics exists, if there were a function that perfectly emulated our universe, then our universe would really exist.
But our universe is not just any universe
Indeed, there are two elements at play here. First and foremost, our universe must contain intelligent, conscious life, otherwise we would not exist in it. Second, not all mathematical objects in our grand Mathematics set are equal. For instance, the set contains pi, but you would also find pi every time you find a circle, meaning that the “count” of pi in the set is much greater than one (not a very rigorous demonstration, I know, but I am just sketching an idea). The idea is that in the Mathematics set, objects count as many times as they appear within subsets or as properties of other mathematical objects.
This means that if your consciousness could end up, with equal probability, in any universe that contains consciousness, it would most likely end up in one of the “simpler” ones, because “simpler” mathematical objects appear more often within other, more complex mathematical objects.
So our universe would be among the simplest universes with consciousness in it. That would explain the Big Bang, because a simple start to our universe made the subsequent complexity more “likely”. It would also explain why our universe possesses 3 spatial dimensions, for instance: 1 or 2 dimensions are not enough for complex life, while 4 and higher make the universe too unlikely for us to exist in it.
Your stable consciousness exists because the brain is equivalent to a function which perfectly describes your consciousness
So, your consciousness is actually a continuous function that gives “your state of consciousness” at any instant. What I mean is that your “state of consciousness”, for lack of a better word, is actually a mathematical element, perhaps in a mathematical structure with a neutral element (where the neutral element = death).
This function would be much more complex than the equation describing our universe; if that is the case, then your consciousness-function would be much more likely to exist as the product of a simpler mathematical element (the universe) than as a disembodied, incoherent consciousness.
This also means that if a brain can be equivalent to your consciousness-function, then the same function run by a computer would make no difference.
How to prove it? Well, if physicists find a single mathematical function capable of emulating the universe perfectly (scale notwithstanding), it would be a clear indicator that I am right. Also, someone more intelligent than me might be able to prove it more thoroughly, like a mathematical theorem (since the idea is pretty much that everything is math).
It took me a few years to come up with the idea, and I think I should make it into its own post. My explanation might not be very clear, and English is not my mother tongue. (I went kind of off topic, but it was fun.)
Not a topic worth debating, waste of time at this point
This strikes me as very foolish. Consider:
“Similarly, when you look at a color wheel, you can see how the color red slowly and continuously shades into orange when it is mixed with yellow.” -- but what is happening in the brain to produce this conscious sensation? As your gaze shifts along the color wheel, fewer and fewer of your retina’s red “cones” are being activated by photons from the color wheel, and more and more green “cones” are being activated, as the color shifts from red to orange towards yellow and green. Those retinal cells produce digital-style “spike trains” of neural firing signals. Surely the rest of your visual cortex learns whether a color is red, green, or in-between based on the ratio of red-cone neural spikes to green-cone neural spikes—essentially a digital signal.
Even if neural spike trains are in some magical way “not digital”, isn’t the light from the color wheel technically digital in the first place? Color, after all, comes from photons, and photons come in discrete, “quantized”, whole-number units. (Wavelength might truly be analogue, but this analog color info is lost when processed by the cones; you just get a “digitized” ratio of activations of different cones.)
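A toy illustration of that last point (my own hypothetical numbers, not a real retinal model): downstream circuitry only ever sees discrete spike counts, and a red-vs-green signal can be read off from their ratio.

```python
def red_green_opponent(red_cone_spikes: int, green_cone_spikes: int) -> float:
    """Return a value in [-1, 1]: +1 is pure 'red' drive, -1 is pure 'green' drive."""
    total = red_cone_spikes + green_cone_spikes
    return 0.0 if total == 0 else (red_cone_spikes - green_cone_spikes) / total

print(red_green_opponent(90, 10))   #  0.8  -> strongly red
print(red_green_opponent(60, 40))   #  0.2  -> orange-ish
print(red_green_opponent(30, 70))   # -0.4  -> shading toward green
```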
You say that a computer would experience an emotion or sensation (like fear) as a binary, on/off boolean. But what about in a machine learning “neural net” design, where the millions of individual nodes of the network each have “weights” that are influenced by many factors? A node representing fearfulness might have ties to many other relevant nodes, perhaps activating at 80% of full strength when triggered by a nearby node indicating that the computer system is probably seeing a rattlesnake, only 53% when triggered by a different node indicating a harmless garter snake, and only 16% when seeing a potentially dangerous tool like a kitchen knife. The neural net system of weights, activation strengths, and associations between concepts, seems much closer to the “analog” human experience of thought, than it is to a pure logic-world where everything is either on or off.
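Here is a minimal sketch of that graded behaviour (the weights and inputs are made up for illustration): a single “fear” unit whose activation varies continuously with its inputs, rather than flipping between 0 and 1.

```python
import math

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def fear_activation(rattlesnake: float, garter_snake: float, knife: float) -> float:
    """Graded 'fear' output of one unit, given evidence for three hypothetical stimuli."""
    weights = {"rattlesnake": 3.0, "garter_snake": 0.5, "knife": -1.0}  # made-up weights
    bias = -1.5
    z = (weights["rattlesnake"] * rattlesnake
         + weights["garter_snake"] * garter_snake
         + weights["knife"] * knife
         + bias)
    return sigmoid(z)

print(fear_activation(1.0, 0.0, 0.0))  # ~0.82: rattlesnake -> high fear
print(fear_activation(0.0, 1.0, 0.0))  # ~0.27: garter snake -> mild fear
print(fear_activation(0.0, 0.0, 1.0))  # ~0.08: kitchen knife -> low fear
```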
For more info, I would encourage you to read about the many parallels between the human visual system and machine-learning “neural net” architectures. The two are very similar in places, so much so that I start to wonder if image-recognition programs can be said to have some kind of visual experience. (Obviously, image-recognition programs have no sense of self / ego / self-model, so even if it’s true that they have some kind of experience, it would be impossible for the program to have self-awareness in the way that a human does. The visual experience would be detached from a larger consciousness, a la “panpsychist” theories.)
I’ve been comparing human neural computation with machine-learning neural-net designs, but of course “digital people” would be designed to mimic human biology much more closely! To be clear, I’m still very uncertain whether “digital people” would be conscious or not. I just don’t think “digital versus analogue” is anywhere near a knockdown argument.
I agreed with your comment (I found it convincing) but downvoted it because if I was a first-time poster here, I would be much less likely to post again after having my first post characterized as foolish.
As one of many “naive functionalists”, I found the OP very valuable as a challenge to my thinking, and so I want to come down strongly against discouraging such posts in any way.
I agree: the EA community claims to be “open to criticism”, but having someone call a first-time poster’s well-articulated and well-argued post foolish is, quite frankly, really disappointing.
In addition, the poster is a professional and has valuable knowledge regardless of how you feel about the merits of their argument.
I’m a university student and run an EA group at my university. I really wish the community would be more open to professionals like this poster who aren’t affiliated with an EA organization, but can contribute different perspectives that aren’t as common within the community.
Ah, I didn’t notice that this was a totally new post! Honestly the writing style and polish felt like such a good match for the Forum, I assumed it must be coming from somebody who posts here all the time. In this new context—sorry, Marcusarvan, for immediately jumping onto a disagreement and not even noticing all the common ground evident via your writing style and EA interests!