I came to the hotel as I was finishing a contract for Rethink Priorities, worked for them there for one month, then did independent research. Now I am employed at an EA org again, and I am paying cost price.
I agree that sentience, at least as we’ve defined it, is an all-or-nothing phenomenon (which is a common view in philosophy but not as common in neuroscience).
What do you think of the argument that there may be cases where it’s unclear whether the term is appropriate or not? So there would be a grey area where there is a “sort of” sentience. I’ve talked to some people who think that this grey area might be taxonomically large, including most invertebrates.
Yeah, I meant it to be synonymous with agent.
Do you mainly see these scenarios as likely because you don’t think there are likely to be many beings in future worlds, or because you think that the beings that exist in those future worlds are unlikely to be conscious?

I had some thoughts about the second case. I’ve done some research on consciousness, but I still feel quite lost when it comes to this type of question.

It definitely seems like some machine minds could be conscious (we are basically an existence proof of that), but I don’t know how to think about whether a specific architecture would be required. My intuition is that most intelligent architectures other than something like a lookup table would be conscious, but I don’t think that intuition is based on anything substantial.

By the way, there is a strange hard sci-fi horror novel called Blindsight that basically “argues” that the future belongs to nonconscious minds and that this scenario is likely.
Thanks!

I personally would disagree that variety of experience is morally relevant. Obviously, most people enjoy variety in their own experiences, but that’s already weighed into the total hedonistic utilitarian equation because it makes us happier. So I don’t think that we need to add it as a separate thing that has intrinsic moral value. Looking at diversity can also be aesthetically pleasing for us, but that gets weighed into the equation because it makes us happy, and so, again, I don’t think we need to say it has intrinsic moral value. I don’t think our aesthetic appreciation of biodiversity is a very significant source of happiness, though, compared to the well-being of the much larger number of animals involved.

I think what you said makes sense given that moral position. I haven’t heard a name for the position that diversity of experience is intrinsically morally significant, but I have a friend who I think argued for a similar position, and I’ll ask him.
Animal Ethics has written about this. Here are some of our relevant posts on the subject. Hopefully they are helpful.

https://www.animal-ethics.org/sentience-section/relevance-of-sentience/why-we-should-consider-sentient-beings-rather-than-ecosystems/
https://www.animal-ethics.org/sentience-section/relevance-of-sentience/why-we-should-consider-individuals-rather-than-species/
https://www.animal-ethics.org/give-moral-consideration-sentient-beings-rather-living-beings/
Imagine you heard about an alien civilization that was pivoting towards colonizing the stars. But most of these aliens received almost no moral recognition, and some of them were raised in inhumane conditions to be killed for trivial reasons by the other aliens. If I heard about this situation, I would be pretty concerned about what the aliens would do when they started colonizing the stars. I wouldn’t be rooting for them by trying to prevent their existential risk instead of trying to improve their values. But of course, that’s a description of our society. There are some additional details about our society that make me more hopeful about it, but it seems quite weird to say that improving our values in this way wouldn’t be important.
Thanks for your comment! I read your article and left a comment on it here. I’ll try to think more about psychosomatics and add a section on it when I have time.
It seems to me that when most EAs are talking about an expanding circle, what we are talking about is an expanding circle of moral concern towards either 1) all sentient beings or 2) equal consideration of interests for all entities (with the background understanding that only sentient beings have interests).

Given this definition of what it means to expand the moral circle, I don’t think Gwern’s talk of a narrowing moral circle is relevant. For the list of entities that Gwern has described us as having lost moral concern for, we did not lose moral concern for them for reasons having to do with their sentience. Even when these entities are plausibly sentient (such as with sacred animals), it seems like people’s moral concern for them is primarily based on other factors. Therefore they should not count as data points in the trend of how our moral circle is or is not expanding. Also, quite plausibly, a big reason why we have lost concern for these entities is an increasingly scientifically and metaphysically accurate view of the world that causes us not to regard these entities as special, as having interests, or even as existing at all.
Thank you! :)

Thanks for mentioning C. elegans behavioural flexibility. I had meant to comment about that, but forgot to. That’s a great paper on the subject.

I think people sometimes unfairly minimize the cognitive abilities of some invertebrates because it gives them cleaner and more straightforward answers about which organisms are conscious, according to their preferred theory.
You are very welcome! :)

That passage is also one of my favourite parts of his answers; thanks for highlighting it.

I’ll take a look at that David Pearce post, thanks for the link.

Thanks for pointing out the typo, fixed it now.
Another way that frugality can improve productivity is that it can reduce the amount of time you spend buying, looking after, organizing, tidying, and thinking about physical possessions (because you probably have fewer of them). Of course, people who aren’t frugal don’t necessarily have more possessions, but they tend to.
Bravo!

I’m particularly excited about the paper submissions and the increased academic expertise of your staff. That seems very important for getting this work taken more seriously.
Staying within the phylum, snails are consumed by humans in many cultures and have attracted some attention as an edge case of consciousness in philosophical circles. A representative from class Gastropoda would therefore be useful.
It looks like there is a small error here. Aplysia was included on the table and is from class Gastropoda.
Great article! I like the conceptual clarification you do about what it means to say that a process is unconscious and how people use this term inconsistently in the literature. I’ve never seen that put so well, and it’s important.

I was wondering what you think of cases where a good idea ‘spontaneously’ occurs to someone while they’re thinking about something unrelated or while their mind is wandering. I only know anecdotes about this phenomenon, but I think it’s widespread and that most people have experienced something like it themselves.

Some people have some of their best ideas in this way, and it seems to satisfy both criteria for being an unconscious process. I am not sure if it’s directly related to any of the potential consciousness-indicating features, but it seems like an example of very complex cognition being unconscious, though it’s a bit murky how it occurs.
Thanks! Good thoughts!

I’m also not sure if we know how expensive emotions are. In particular, even if some emotions are complicated, I’m not sure if the basic conscious experience of pain is complicated (at least the affective part of the experience, maybe not the sensory part). It subjectively seems like quite a simple feeling, but I don’t know much about this, and I’d like to learn more.
Shelley Adamo misunderstands the first question in part c) of her answer. I didn’t mean to suggest that biology was required for consciousness, just that biological organisms might be more likely to have underlying homology with humans, which could mean that they might be conscious while a similarly complex AI would not be.

I think that our best theories of consciousness suggest that at some point AI will be conscious.

An issue with a written interview like this is that you can’t make clarifications on the fly to head off misunderstandings. I hope to improve how I conduct these interviews in the future.