So, two questions (please also see my reply to HjalmarWijk for context):
Do you on these grounds think that insect suffering (and everything more exotic) is meaningless? Our last common ancestor with insects hardly had any neurons, and unsurprisingly our neuronal architectures are very different, so there aren’t many reasons to expect any isomorphism between our “mental” processes.
Assuming an AI is sentient (in whatever sense you put into that word) but otherwise not meaningfully isomorphic to humans, how do you define a “positive” inner life in that case?
In philosophy of mind, functionalism defines mental states by their causal roles. Pain, for example, is the state that usually causes withdrawal, avoidance, yelping, etc., and is usually caused by things like tissue damage. If you see pain as this “tissue damage signaling” causal structure, then you could imagine insects having it as well, even if there is no isomorphism. It’s hard to imagine AI systems having this, but you could more easily imagine AI systems having frustration, if you define it as the inability to attain goals together with the realization that those goals are not being attained. Strict isomorphism is required by machine functionalism, which essentially states that two feelings are the same if they are basically the same Turing machine running. But humans could be said to be running many Turing machines, no two humans are running the same Turing machine, and comparing states across two different Turing machines doesn’t really make sense. So I’m not very interested in this idea of strict isomorphism.
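To make the functionalist picture concrete, here’s a toy sketch (purely my own illustration, with made-up classes, not a claim about real insect or mammal neuroscience): the test for “pain” only looks at the input–output role, so two very differently built systems can both fill it.

```python
# Toy sketch of the functionalist idea: a state counts as "pain" if damage-type
# input tends to produce withdrawal-type output, regardless of how the system
# is built inside. All names here are hypothetical.

class Mammal:
    def react(self, tissue_damage: bool) -> str:
        # nociceptors -> spinal reflex -> cortex, etc. (internals irrelevant here)
        return "withdraw" if tissue_damage else "carry on"

class Insect:
    def react(self, tissue_damage: bool) -> str:
        # a tiny, very differently wired nervous system
        return "withdraw" if tissue_damage else "carry on"

def fills_pain_role(agent) -> bool:
    """Functional test: damage input reliably causes avoidance output."""
    return (agent.react(tissue_damage=True) == "withdraw"
            and agent.react(tissue_damage=False) != "withdraw")

print(fills_pain_role(Mammal()), fills_pain_role(Insect()))  # True True
```

The point of the sketch is multiple realizability: nothing about the test requires the two realizers to share any internal structure.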
But I’m not fully on board with functionalism of the fuzzier, “squishy” kind either. I suppose something could have the same causal structure but not really “feel” anything. Maybe there is something to mind-body materialism: pain just is a certain kind of neuron firing, for instance. In that case, we’d have reason to doubt that insects suffer if they don’t have those neurons. On that view I do doubt that insects suffer, but on the more functionalist flavor of thinking I don’t. So I’m pretty agnostic. I’d imagine I’d be similarly agnostic about AI, and as such I wouldn’t be in favor of handing the future over to them and away from humans, just as I’m not in favor of handing the future over to insects.
To answer the second question: I think of this in a functionalist way, so if something plays the same causal role as positive mental states do in humans, that’s a good reason to think it’s positive.
For more I recommend Amanda Askell’s blog post or Jaegwon Kim’s Philosophy of Mind textbook.
>It’s hard to imagine AI systems having this
Why? As per instrumental convergence, any advanced AI is likely to have a self-preservation drive, and the negative reward signal it would receive when that drive is violated would be functionally very similar to pain (give or take the bodily component, but I don’t think that’s required; otherwise simulating a million human minds in agony would be OK, and I assume we agree it’s not). Likewise, any system with goal-directed agentic behavior would experience some reward from moving towards its goals, which seems functionally very similar to pleasure (or satisfaction, or something along those lines).
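To make the analogy concrete, here’s a toy sketch (entirely my own illustration, with hypothetical names and numbers, not how any actual system is trained): a reward signal in which damage to the agent’s “integrity” plays the pain-like role and progress toward its goal plays the pleasure-like role.

```python
# Toy sketch of the functional analogy; names and constants are made up.

def reward(progress_toward_goal: float, integrity: float) -> float:
    """integrity in [0, 1]: 1 = fully intact, 0 = destroyed."""
    pain_like = -10.0 * (1.0 - integrity)       # triggered by "damage", drives avoidance
    pleasure_like = 1.0 * progress_toward_goal  # reward for moving toward the goal
    return pain_like + pleasure_like

print(reward(progress_toward_goal=0.3, integrity=1.0))  # intact: mildly positive
print(reward(progress_toward_goal=0.3, integrity=0.5))  # damaged: strongly negative
```

On a functionalist reading, the question is whether a signal like this plays enough of pain’s causal role, not whether it runs on neurons.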
I just think anguish is more likely than physical pain. I suppose there could be physical pain in a distributed system as a result of certain nodes going down.
It’s actually not obvious to me that simulations of humans could have physical pain. It seems possible, but maybe only other kinds of pain, like anguish and frustration, are possible.