The following are some comments/questions for anyone interested.
Evolutionary theory suggests that insects will be selected to have emotions if the benefits of having them are greater than the costs of generating them. However, the costs appear to be heavy, and the benefits seem minimal.
Could anyone point me to evidence for or against emotions being “computationally” expensive? Are emotions computations at all?
Nervous systems are very expensive for animals.
Given that we don’t know the nature of phenomenology (“Unfortunately, we don’t know how we generate emotions.”), maybe emotions (or at least simpler feelings) are energetically cheap and simple enough for evolution to have selected them early in the history of life? Again, I would appreciate pointers to relevant literature.
I do not see why robots couldn’t have internal experience (i.e. feelings) if their artificial neural networks had connections functionally of the same type as those we use to produce emotions.
If, e.g., the unique valence properties of the carbon atom are part of how (phenomenally bound) consciousness arises, then classical artificial neural networks cannot be functionally the same in the relevant sense.
Also, how does the fact that digital computations are interpretation-dependent affect the possibility of digital consciousness? (A tangentially relevant paper worth sharing is “Universe creation on a computer” by Gordon McCabe.)
The human brain is the ‘swiss army knife’ of brains; we can use our cognition to do almost anything. We are not especially speedy, but we can build cars. We’re not great swimmers (like a dolphin), but we can build boats. We can’t fly, but we can build planes.
We can as a distributed “intelligence”, yes; one human alone cannot do these things. I find this quote from Magnus Vinding’s Reflections on Intelligence illuminating on this (tangential) point:
“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee. They are both equally unable to read and write on their own, not to mention building computers or flying to the moon. And this is also true if we compare a tribe of, say, thirty humans with a tribe of thirty chimpanzees. Such two tribes rule the Earth about equally little. What really separates humans from chimpanzees, however, is that humans have a much greater capacity for accumulating information, especially through language. And it is this – more precisely, millions of individuals cooperating with this, in itself humble and almost useless, ability – that enables humans to accomplish the things we erroneously identify with individual abilities: communicating with language, doing mathematics, uncovering physical laws, building things, etc. It is essentially this you can do with a human that you cannot do with a chimpanzee: train them to contribute modestly to society. To become a well-connected neuron in the collective human brain. Without the knowledge and tools of previous generations, humans are largely indistinguishable from chimpanzees.
I’m also not sure we know how expensive emotions are. In particular, even if some emotions are complicated, I’m not sure the basic conscious experience of pain is complicated (at least the affective part of the experience, as opposed to the sensory part). It subjectively seems like quite a simple feeling, but I don’t know much about this, and I’d like to learn more.
Thank you for the work, Max (et al.)!
Thanks! Good thoughts!