A bit repetitive to what I replied below, but it isn't clear to me that minimally conscious beings can't suffer (or that they could be made unable to suffer).
On the relatively more stable ground of the power to choose between a world optimized for insects vs. one optimized for humans, I'm happy to report I'm a humanity partisan. :-)
In theory-of-mind terms, it sounds like we differ in estimating the likelihood that insects will be thought of as having conscious experience as we learn more. (For other invertebrates, I think the analysis may be very different.) My sense, given the extraordinary capabilities of really-clearly-not-conscious ML systems, is that pretty sophisticated behaviors are well within reach for unconscious organisms, more so than I might have thought a few years ago.
> A bit repetitive to what I replied below, but it isn't clear to me that minimally conscious beings can't suffer (or that they could be made unable to suffer).
I think it's very likely that we can stack the deck in favour of positive welfare, even if the expected value is still close to zero due to a low probability of consciousness or low moral weight. There are individual differences in average hedonic setpoints between humans that are influenced genetically, along with some extreme cases like Jo Cameron, who feels almost no pain or anxiety because of a rare mutation. The systems for pleasure and suffering don't overlap fully, so we could cut out the parts selectively devoted to suffering or reduce their sensitivity.
> In theory-of-mind terms, it sounds like we differ in estimating the likelihood that insects will be thought of as having conscious experience as we learn more. (For other invertebrates, I think the analysis may be very different.) My sense, given the extraordinary capabilities of really-clearly-not-conscious ML systems, is that pretty sophisticated behaviors are well within reach for unconscious organisms, more so than I might have thought a few years ago.
I agree that many sophisticated behaviors are well within reach for unconscious systems, but it's not clear this counts that much against invertebrates. You can also come at it from the other side: it's hard to pick out capacities that humans have that are with very high probability necessary for consciousness (e.g. theory of mind, self-awareness to the level of passing the mirror test, and the capacity for verbal report don't seem necessary) but that aren't present in (some) invertebrates. I'd recommend Rethink Priorities' work on this topic (disclaimer: I work there, but didn't work on this, and am not speaking for Rethink Priorities) and Luke Muehlhauser's report for Open Phil.
Also, at what point would you start to worry about ML (or other AI) systems being conscious, especially ones that aren’t capable of verbal report?
Completely agree that it is difficult to find "uniquely human" behaviors that seem indicative of consciousness, as animals share so many of them.
For animals that don't rear young, I'm much more inclined to believe their behaviors are largely genetically determined and therefore operate on time scales that don't really satisfy what I think it makes sense to call consciousness. I'm thinking, for instance, of the famous Sphex wasp hacks, where complex behavior turns out to be pretty algorithmic and likely not indicative of anything approximating consciousness. Thanks for the pointer to the report!
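To make "pretty algorithmic" concrete, here's a toy sketch of the kind of fixed-action loop the Sphex example is famous for: move the prey while the wasp is inspecting the burrow and the whole routine restarts, with nothing recording that the check was already done. This is entirely hypothetical Python with made-up names; it doesn't model real wasp biology, it just shows the loop structure.

```python
# Toy illustration only: a "sphexish" provisioning routine that restarts from
# the top whenever the prey is displaced, keeping no memory of prior checks.

def provision_burrow(times_prey_is_moved: int) -> int:
    """Run the routine once; return how many burrow inspections it performs."""
    inspections = 0
    while True:
        # Step 1: drag the prey to the burrow entrance.
        inspections += 1                # Step 2: go inside and inspect the burrow.
        if times_prey_is_moved > 0:     # Prey nudged away during the inspection?
            times_prey_is_moved -= 1
            continue                    # The whole routine restarts at Step 1.
        return inspections              # Step 3: drag the prey inside; done.

print(provision_burrow(0))    # 1  (the undisturbed case)
print(provision_burrow(40))   # 41 (the wasp never "notices" the repetition)
```

The point is just that behavior can look quite elaborate from the outside while being a loop that keeps no record of its own history.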
WRT AI consciousness: I work on ML systems and have a lot of exposure to sophisticated models. My sense is that we are not close to that threshold, even with sophisticated systems that are obviously able to pass naive Turing tests (and have). My sense is that we now have a really powerful approach to world-model building with unsupervised noise prediction, and that current techniques (including RL) are just nowhere near enough to provide the kind of interiority AI systems would need before I start worrying there are conscious elements there.
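For concreteness, this is roughly what I mean by "unsupervised noise prediction": hide part of an observation, train the model to fill the hidden part back in, and a rough world model falls out without any labels. The snippet below is a toy masked-denoising sketch in PyTorch, my own illustration rather than any particular lab's setup.

```python
# Toy masked-denoising objective: corrupt the input, predict what was hidden.
import torch
import torch.nn as nn

torch.manual_seed(0)
data = torch.randn(1024, 16)          # stand-in "observations" of some world

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    batch = data[torch.randint(0, 1024, (64,))]
    mask = (torch.rand_like(batch) < 0.3).float()  # hide ~30% of each observation
    corrupted = batch * (1 - mask)                 # zero out the masked entries
    pred = model(corrupted)
    loss = ((pred - batch) ** 2 * mask).mean()     # score only the hidden parts
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Nothing in that loop needs supervision; the structure of the data itself is the training signal, which is why it scales so well as a world-model builder.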
IOW, I'm not a "scale is all you need" person: I don't think current ideas about memory/long-range augmentation or current planning-style long-range state modeling are workable. I mean, maybe times 10^100 it is all you need? But that's just sort of another way of saying it isn't. :-) The sort of "self-talk" modularity that some LLMs are being experimented with strikes me as the most promising current direction for this (e.g. the LaMDA paper), but currently the scale and ingredients are way too small for that to emerge, IMO.
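By "self-talk" modularity I mean something like the loop below: the model's own intermediate output gets appended to its context before it commits to an answer. This is only a structural sketch; `generate` is a placeholder for any text model, not a real API, and the names are invented.

```python
# Structural sketch only: an "inner monologue" loop where the model conditions
# on its own intermediate text. `generate` is a placeholder, not a real API.

def generate(prompt: str) -> str:
    """Stand-in for a call to an actual language model."""
    return "(model output conditioned on %d chars of context)" % len(prompt)

def answer_with_self_talk(question: str, rounds: int = 3) -> str:
    context = question
    for i in range(rounds):
        thought = generate(context + "\nThought " + str(i + 1) + ":")
        context += "\n[inner monologue] " + thought  # the model talks to itself
    return generate(context + "\nFinal answer:")      # answer sees its own self-talk

print(answer_with_self_talk("Is the wasp's routine evidence of consciousness?"))
```

It's that recurrent conditioning on its own internal state, not raw parameter count, that seems like the missing ingredient to me.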
I do suspect that building conscious AI will teach us far more about non-verbal-report consciousness. We have some access to these mechanisms through neuroscience experiments, but it is difficult going. My belief is that we already have enough of those to be quite certain many animals share something best called conscious experience.