It happens that I also worry about digital suffering, but I have two great uncertainties:
1. Whether artificial consciousness is possible.
2. If (1) is possible, whether such beings can have the capacity for positive and negative experiences.
My uncertainty about (1) is much greater than about (2), maybe 100x greater. I wonder what your credence in artificial sentience is? It would be very useful to me if you could share it. Am I right in guessing that, even after adjusting for the probability of creating digital beings vs. the probability of space factory farming, you still think the expected number of digital beings is greater (or the ratio of moral significance is)?
(Btw, you might have noticed I said digital beings instead of digital people; I can't think of a reason why there would be digital people but not digital animals, unless the word "people" includes them.)
I'm pretty confident artificial consciousness is possible, though I haven't looked into it much. This is primarily because it seems like consciousness is a property of the cognition itself, independent of the substrate running that cognition.
As an intuition pump, suppose we understand in great detail the exact equations governing the firing of synapses in the brain, and we then recreate my brain in software using these equations. Given an environment that mimics the real world (inputs to the optic nerve identical to what the retina would have received, similarly for the other senses, and outputs to all of the muscles, including the tongue for speech), I claim the resulting system would do exactly what I would do, including e.g. saying that it is conscious when asked. It seems very likely that this system, too, is conscious.
(I’m also confident that digital beings can have the capacity for positive and negative experiences.)
If you ask me about particular digital “beings” (e.g. AI systems, databases, Google search), then I become a lot more uncertain about (1) and (2).
I am glad I sort of answered your question!