This house believes that if digital minds are built, they will:
be conscious
experience valence (i.e., pleasure and/or pain)
I think this is an important debate to have because, as has been pointed out here and here, EA seems to largely ignore prioritization considerations around digital sentience and suffering risk.[1]
To argue against the motion, I suggest David Pearce: see his view explained here. To argue for the motion, maybe—aiming high—David Chalmers: see his position outlined here.
See the linked posts’ bullet points titled “I think EA ignores digital sentience too much,” and “Suffering-focused longtermism stuff seems weirdly sidelined,” respectively.
I think “digital minds can’t be conscious” is an uncommon position among EAs.
Maybe Rob Long?