First, this article is very well articulated, which is quite the feat considering the wide array of topics it covers. Bravo, FalseCogs. And the perspective of increasing entropy as the end-goal of life, instead of a detriment to it, is original.
I will start with what I agree with in this article. First, organizing society into organ-agent-system levels seems like a good way to arrange the myriad phenomena that make up life. I agree that each of these levels serves the others, because if they did not, the higher levels would cease to exist. (Examples of this would be heart failure for the organ-agent level, or the behavior of solitary animals for the agent-system level. An example of the latter is the jaguar, which is social only to mate. Because of this, jaguars lack the survival benefits of living in a social group.)
Now, I have a few questions. Please correct me if these questions misunderstand your article.
Why do you attribute a goal to “life”? Wouldn’t that be a personification of life, which I think is mistaken, since science has shown us the indifference of matter to everything other than the laws of nature?
What evidence do you have of strong AI being able to exist?
Supposing strong AI can exist, what if a strong narrow AI determines that life does not want humanity to exist? Should that then become the goal of the strong, general AI?
Thanks for the comment. It’s nice to see someone getting something out of it.
On the topic of life having goals, it’s not that the universe necessarily has an end goal, but that, like the seasons on Earth, each period (or spacetime region) may have a shared (observably universal) tendency, and those pursuits or actions which follow that tendency should flow most smoothly. Moreover, the goals and aims of human and other life already seem to follow this tendency. And the higher-level emergent aspects of this inter-level tendency already seem to be the basis for human moral and legal frameworks, though with a certain jitter, or error margin—presumably due to the inherent entropy of human inference, coupled with the limitations of common human intellect and the limited scope of applicable consideration.
The key here is the inherent transcendence of what we’re already doing, where in the long run, it doesn’t matter what we may think or feel at a given point along the journey of evolution—we’re already and inescapably serving that tendency. If I or anyone else hadn’t said this, someone or something else likely would have. It doesn’t belong to me or anyone else, though I don’t mean to suggest I have it described accurately. I see myself here as mere observer, though perhaps nothing at all.
On the topic of strong AI being able to exist, my stance is mostly based on my understandings of neurology and psychology, mixed with my subjective experience of non-doership and object-observer non-separation. Naturally I don’t expect everyone to share this belief about AI. And of course it’s just an assumption based on one mind’s current limited reason and experience. The philosophical basis for the non-duality of qualia is curious, but I’ll refrain from going there at the moment, particularly as it too seems at least partly based on assumption.
On the topic of the prospect of an “anti-human” tendency being inferred, the answer comes back to what was mentioned above: if so, then humans are already and inherently of and for that end, even if unknowingly or seemingly unwillingly. Indeed this idea seems fatalist. But that doesn’t necessarily make it false. Realistically, humans may be less likely to be determined “unwanted” than “made-to-order” for a particular purpose—a purpose perhaps temporary and specific to an occasional or spacetime-regional set of conditions. Some humans, given such a prospect, might find comfort in transhumanist or posthumanist ideas, such as mind-uploading, memory-uploading, or slowly merging into something else.