Thanks for the comment. It’s nice to see someone getting something out of it.
On the topic of life having goals, it’s not that the universe necessarily has an end goal, but that, like the seasons on Earth, each period (or spacetime region) may have a shared (observably universal) tendency, and those pursuits or actions which follow that tendency should flow most smoothly. Moreover, the goals and aims of human and other life already seem to follow this tendency. And the higher-level emergent aspects of this inter-level tendency already seem to be the basis for human moral and legal frameworks, though with a certain jitter, or error margin—presumably due to the inherent entropy of human inference, coupled with the limitations of common human intellect and the limited scope of applicable consideration.
The key here is the inherent transcendence of what we’re already doing, where in the long run, it doesn’t matter what we may think or feel at a given point along the journey of evolution—we’re already and inescapably serving that tendency. If I or anyone else hadn’t said this, someone or something else likely would have. It doesn’t belong to me or anyone else, though I don’t mean to suggest I have it described accurately. I see myself here as mere observer, though perhaps nothing at all.
On the topic of strong AI being able to exist, my stance is mostly based on my understandings of neurology and psychology, mixed with my subjective experience of non-doership and object-observer non-separation. Naturally I don’t expect everyone to share this belief about AI. And of course it’s just an assumption based on one mind’s current limited reason and experience. The philosophical basis for the non-duality of qualia is curious, but I’ll refrain from going there at the moment, particularly as it too seems at least partly based on assumption.
On the topic of the prospect of an “anti-human” tendency being inferred, the answer comes back to what was mentioned above: if so, then humans are already and inherently of and for that end, even if unknown or seemingly unwanted. Indeed this idea seems fatalist. But that doesn’t necessarily make it false. Realistically, humans may be less likely to be deemed “unwanted” than “made-to-order” for a particular purpose—a purpose perhaps temporary and specific to an occasional or spacetime-regional set of conditions. Some humans, given such a prospect, might find comfort in transhumanist or posthumanist ideas, such as mind-uploading, memory-uploading, or slowly merging into something else.