I am deeply impressed by the amount of ground this essay covers so thoughtfully. I have a few remarks. They pertain to Miller's focal topic as well as to avoiding a massive popular backlash against general AI and limited, expert-system AI, a backlash that would make current resistance to science and human "expertise" in general look pretty innocuous. I close with a remark on the alignment of AI with animal interests.
I offer everyone an a priori apology if this comment seems pathologically wordy.
I think that AI alignment with the interests of the body is essential to achieving alignment with human minds; probably necessary but not sufficient. Cross-culturally, regardless of superficial and (anyway) dynamic differences in values among cultures, humans generally have a hard time being happy and content when they are worried about their bodily well-being. Sicknesses of all kinds lead to intrusive thoughts and emotions amounting to existential dread for most people, even the likes of Warren Zevon.
The point is that I know that I, and probably most people across cultures, would be delighted to have a human doctor or nurse walk into an exam or hospital room with a pleasant-looking robot that we (the patient) truly perceived, correctly so, to possess general diagnostic super-intelligence based on deep knowledge of the healthy functioning of every organ and physiological system in the human body. Personally, I've never had the experience that any doctor of mine, including renowned specialists, had much of a clue about any aspect of my biology. I'd also feel better right now if I knew there was an expert system that was going to be in charge of my palliative care, which I'll probably need sooner rather than later, a system that would customize my care to minimize my physical pain and allow me to die consciously, without irresistible distraction from physical suffering. Get to work on that, please.
Such a diagnostic AI system, like a deeply respected human shamanic healer treating a devout in-group religious follower, would even be capable of generating a supernormal placebo effect (current Western medicine and its associated health-insurance systems most often produce strong nocebo effects, ugh), which it seems clear would be based on nonconscious mental processes in the patient. (I think one of the important albeit secondary adaptive functions of religions is to produce supernormal placebo effects; I have a hypothesis about why placebo effects exist and why religious healers in spiritual alignment with their patients are especially good at evoking them, a topic for a future essay.) The existence of placebo effects, and their opposite, is good evidence that AI alignment with the body is somewhat equivalent to alignment with the mind.
"Truly perceived" is important. That is one reason I recommend that a relaxed and competent human health professional accompany the visiting AI system. Even though the AI itself may speak to the patient, it is important that this super-expert system, perhaps limited in its ability to engage emotionally with the patient (like Hugh Laurie's character, House), be gazed upon with admiration and a bit of awe by the human partner during any interaction with the patient. The human then at least competently pretends to understand the diagnosis and promises the patient to promptly implement the recommended treatment. They can also help answer questions the patient may have throughout the encounter. An appropriate religious professional could also be added to the team, as needed, as long as they too show deep respect for the AI system.
I think a big part of my point is that when an AI consequentially aligns with our bodies, it thereby engenders a powerful "pre-reflective" intimacy with the person. This will help preempt reflective objections to the existence and activities of any AI system. And this will work cross-culturally, with practically everyone, to ameliorate the alignment problem, at least as humans perceive it. It will promote AI adoption.
Stepping back a moment: as humans evolved the cognitive capacities to cooperate in large groups while preserving significant degrees of individual sovereignty (unlike, e.g., social insects), and then promptly began to co-evolve capacities to engage in the quintessentially human, cross-cultural way of life I'll call "complex contractual reciprocity" (CCR), a term better unpacked elsewhere, we also had to co-evolve a strong hunger for externally sourced, maximally authoritative moral systems, preferably ones perceived as "sacred." (Enter a long history of natural selection for multiple cognitive traits favoring religiosity.) If the moral system is not from a sacred external source, but amounts to some person's or subculture's opinion, then argument, instability, and the risk of chaos are going to be on everyone's minds. Durable, high degrees of moral alignment within groups (whose boundaries can, under competent leadership, adaptively expand and contract) facilitate maximally productive CCR, and that is almost synonymous with high, on average, individual lifetime inclusive fitness within groups.
AI expert systems, especially when accompanied by caring, compassionate human partners, can be made to look like highly authoritative, externally sourced fountains of sacred knowledge related to fundamental aspects of our well-being. Operationally, "sacred" here means minimally questionable. As humans we instinctively need the culturally supplied contractual boilerplate, our group's moral system (all about alignment), and other forms of knowledge intimately linked to our well-being to be minimally questionable. If a person feels that an AI system is BOTH morally aligned with them and their in-group AND able to take care of their health practically like a god, then from the human standpoint, alignment doubts will be greatly ameliorated.
Finally, a side note, which I'll keep brief. Having studied animals in nature and in the lab for decades, I'm convinced that they suffer. This includes invertebrates. However, I don't think that even dogs reflect on their suffering. (Using meditative techniques, humans can get access to what it means to have a pre-reflective yet very real experience.) Anyway, for AI to ever become aligned with animals, I think it's going to require that the AI align with their whole bodies, not just their nervous systems or particular ganglia therein. Again, this is because with animals the AI faces the challenge of ameliorating pre-reflective suffering. (I'd say most human suffering, because of the functional design of human consciousness, is on the reflective level.) So, by designing AI systems that can achieve alignment with humans in mind and body, I think we may simultaneously generate AI that is much more capable of tethering to the welfare of diverse animals.
Best wishes to all, PJW