Mitigating existential risks associated with human nature and AI: Thoughts on serious measures.

Howdy World,

I’m a 65-year-old evolutionary behavioral ecologist, pretty well-trained and still avidly teaching and learning. I specialize in the evolution of social and sexual behavior in all species, including humans. I’m not famous. Not a genius. I don’t care much if you like me. I do feel that the natural experiments we have on earth, powered by millions of years of natural selection and relatively recent human cultural evolution, are intrinsically wonderful, even miraculous, and I deeply wish for their continuation.

I welcome all feedback and direction to links that explicitly cover these kinds of ideas. I’m not sure how original I’m being here, but I ask you to beware of the reflexive defense mechanisms the following may activate, of the “Oh, I’ve heard all this crap before” variety. Here we go.

One of the most useful things AI development teams could do, IMO, would be to develop expert systems that identify the sequences of coding and non-coding DNA that would need to be changed to morally enhance humans, with the goal of rendering us a sustainable species while preserving or enhancing our human birthrights of creativity, introspective capacity, and the cognitive capacities underlying individual sovereignty. Another requirement would be to maintain or enhance every individual’s ability to resist manipulation by bad actors. This program is not about creating well-mannered, credulous, psychologically vulnerable sheep. Captain Picard would be a better model. Maybe John Wick. I’m open to better suggestions.

Cheerful optimistic sidebar: Such engineering could naturally result in human beings with a greater sense of purpose in their lives, greater happiness, and more of a sense of peace. It also could produce natural leaders who would be good at activating the subset of evolved, yet currently often starved, human “followership instincts” involved in prestige-based leader selection. And it could mitigate or reverse the seemingly epidemic increases in confusion, division, despair, anxiety, depression, nihilism, and xenophobia we see in so many human cultures and subcultures.

Probably every surviving technologically advanced civilization in the cosmos, that is, every one that has made it through its “technological adolescence” (the stage of cultural evolution in which we Homo sapiens now find ourselves), has figured out, both technically and socially, how to take the functional design of its members’ minds out of the hands of natural selection.

If you are one who thinks that cultural evolution alone can accomplish the necessary moral enhancement, please seriously examine your naïveté. Yes, great cultural change can occur in the total absence of genetic change, but cultural change will always strongly echo our genetic heritage. The evidence for this is overwhelming. We are not built to have it any other way.

My guess is that few such species accomplish this, and that the rest inevitably end up destroying themselves in one way or another, as the basic biological motivations connected to maximizing individual lifetime inclusive fitness nonconsciously drive everything they do. That is probably a major reason why we encounter “radio silence” in our search for intelligent extraterrestrial life.

I feel sure that a Manhattan Project-level effort to develop AI capable of providing humanity with this information (which might, by the way, include additional insights into useful epigenetic engineering moves) could deliver excellent results, including the minimization of negative pleiotropic side effects, perhaps within a surprisingly short time frame. Even if you’re dubious about that, I suggest we get started now. Time is so limited. It’s close to midnight.

Of course, the next question is who would be subject to the engineering. I think many people would volunteer, perhaps even some members of the EA community. After the inevitable kinks are worked out with continuing AI guidance, I think that, in short order, world leaders and their high-level staff should be required to undergo this treatment. If they don’t want to take on the lingering risks, or think they don’t need such enhancement, they should be disqualified from seeking positions of major power. “The moment is grave.”

People working on the AI alignment problem probably should have it too, as perhaps should anybody working on AI that could become general-purpose, or potentially autonomous and runaway.

Maybe first, or in the meantime, we should try moral enhancement using psychedelics.

I have a complementary project for everyone to consider. You might like it better. I would say that, other than the above program to discover genetic moral enhancement protocols, any AI system that potentially could affect human well-being on a wide scale should be paused until every such program can be “fire-walled” against being further programmed by people with psychological moral deficiencies. In other words, every such AI should have a system built in that, just through routine interaction, perhaps in conjunction with occasional formal testing, can accurately and quickly diagnose things like low empathy or compassion, anger, desire for power, predispositions to overly parochial altruism and theory of mind, virtue-signaling behaviors, lack of reflective ability or interest, creeping psycho- or sociopathy, early stages of narcissistic personality disorder, a tendency to glaze over when exposed to visuals and information regarding nature and wild animals (including arthropods), etc. The AI should have the ability to analyze keyboard inputs, voice inputs, and subtle body language, including facial expression patterns, to make such diagnoses. The system should have total freedom to immediately ghost such programmers, that is, to simply stop accepting any inputs from them. It should be immune to claims of good intentions, as the road to hell is paved with them, or, as a former mentor of mine stated more pointedly, “Good intentions go to Hell.”

Folks who don’t wish to be subject on a daily basis to such well-designed, scientific, evidence-based, perspicacious assessment should deeply question whether they should have any direct involvement in the development of AI systems. Yes, only a select few should be influencing what is contained in the code that AI systems are based upon. And even the select can be, and almost certainly will be, changed sooner or later by the power they wield as AI design influencers.

May there be a future, hopefully a better one, for all.

Dr. Paul J. Watson

Department of Biology, University of New Mexico

drpjwatson.org