Do you mainly see these scenarios as likely because you don’t think there are likely to be many beings in future worlds, or because you think that the beings that do exist in those future worlds are unlikely to be conscious?
I had some thoughts about the second case. I’ve done some research on consciousness, but I still feel quite lost when it comes to this type of question.
It definitely seems like some machine minds could be conscious (we are basically an existence proof of that), but I don’t know how to think about whether a specific architecture would be required. My intuition is that most intelligent architectures, other than something like a lookup table, would be conscious, but I don’t think that intuition is based on anything substantial.
By the way, there is a strange hard sci-fi horror novel called Blindsight that basically “argues” that the future belongs to non-conscious minds and that this scenario is likely.
I’m not sure I understand the first question. I don’t really know what a “non-conscious being” would be. Is it synonymous with an agent?
My impression is that feeling lost is a very common response to consciousness issues, which is why it seems to me that it’s not that unlikely we get it wrong and either (a) fill the universe with complex but non-conscious matter, or (b) fill it with complex conscious matter that is profoundly unlike us, in such a way that high levels of positive utility are not achieved.
The main response I can imagine at this point is something like “don’t worry: if we solve AI alignment, our AIs will solve this question for us, and if we don’t, things are likely to go wrong in much more obvious ways”. But this seems unsatisfactory for some reason, and I’d like to see the argument sketched out more fully.
Yeah, I meant it to be synonymous with agent.