This post updated me a bit toward being more concerned about the AIs themselves; your depiction really evoked my empathy. I'd previously been so concerned with human doom that I'd almost refused to consider it, but going forward I'll definitely make an effort to be conscious of this sort of possibility.
For a fictional representation of my thinking (which your post reminded me of), Ted Chiang has a short story about virtual beings that can be cloned, some of which are potentially abused: "The Lifecycle of Software Objects".
Yeah, and we already know humans can be extremely sadistic when nobody can catch them. I've emailed CLR about this in case they aren't already on it; I don't have time to work on it myself, and I really want somebody to think about it.
In his recent podcast with Lex Fridman, Max Tegmark speculates that recurrent neural networks (RNNs) could be a source of consciousness, whereas the linear, feed-forward architecture of today's dominant LLMs isn't. However, I'm not sure whether this would help us or the AIs avoid doom, since such consciousnesses could have very negative valence (and so hate us for bringing them into being). And I think it's very ethically fraught to experiment with trying to create digital consciousness.