Origin and alignment of goals, meaning, and morality
This paper seeks to provide an oversight perspective on the nature of agency, the origin of intrinsic drives (also known as core instincts), and the morality of interaction between agents, their goals, and their societies. Unlike more basic descriptions commonly found elsewhere, this paper places stronger emphasis on the existential aspect of agency, goals, and goal alignment. Specifically, consciousness and self-transcendence are explored—less from a spiritual standpoint, and more through a psychological and sociological lens. Finally, findings and insights are compared against popular proposals for the alignment of strong artificial intelligence. For the main findings and conclusions, see the “Summary” section.
Intended audience: ethicists; sociologists; psychologists; existentialists; transcendentalists; AI researchers and enthusiasts.
While the text addresses several philosophical topics, no special prerequisites are assumed. Basic familiarity with biology, agency, and causality is recommended. This text assumes the truth of evolution by natural selection, and it assumes matter and energy to follow consistent patterns, particularly at macro scales. Room is made for strong emergence, but mostly the discussion revolves around weak emergence. Prior understanding of emergence is optional. Within this text, the words “goal”, “imperative”, and “will” may be used both in the teleologic sense (planned and volitional) and in the teleonomic sense (tending and compelled). Familiarity with these terms is optional. Clarification is given where one meaning is preferred over the other. Again, no special background is assumed for the main message.
Agency
An agent is someone or something that has thought, movement, or tendency toward some aim or goal. Often the individual, or agent—such as a person—is viewed as an atom, or concrete, indivisible thing. And collections of interacting agents comprise systems, such as hives, colonies, and societies. But further perspectives can offer valuable insight. The agent, for example, may be viewed as itself a system of organs, or internal regions or subprocesses delegated with specific goals or aims. Moreover, each member of a group of agents may be viewed as an organ of a higher being, or higher agent. Within society, for example, each person may be assigned roles and imperatives, whether intrinsically (i.e. innately) or extrinsically (i.e. socially). And these roles and imperatives may then “give back” and support higher entities, such as families, organisations, colonies, and ecosystems.
Any such system, agent, or organ, at any level, may have tendencies and goals. And these goals may or may not align. We might say that agency, thus, is organised naturally into a hierarchy of emergent levels, or integrative levels. Let us call this arrangement the system-agent-organ hierarchy. Note that from each point along the hierarchy, there are two neighbouring directions: smaller, or toward the organs inside; and larger, or toward the system outside. Harmony between the goals of the agent and its organs may be termed inner goal alignment. And harmony between the agent and its parent system may be termed outer goal alignment. Both aspects of alignment represent health on some level. Specifically, inner goal alignment supports individual health while outer goal alignment supports systemic health. The latter is often called morality, as the core function of morality is keeping balance, harmony, and function—that is, goal alignment—inter-agently, or within the system of agents. Hence, the health of a system depends on the morality of its members. And perhaps amusingly, the health of an individual depends on the morality of its organs.
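To make the terminology concrete, here is a minimal sketch in Python. The `Entity` class, the goal strings, and the equality-based notion of alignment are all illustrative assumptions, not part of the original argument; real goals are, of course, not single labels.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A node in the system-agent-organ hierarchy.

    `goal` is a simplified stand-in for an entity's tendencies;
    alignment is modelled here as plain goal equality.
    """
    name: str
    goal: str
    organs: list = field(default_factory=list)
    parent: "Entity | None" = None

    def add_organ(self, organ: "Entity") -> "Entity":
        organ.parent = self
        self.organs.append(organ)
        return organ

    def inner_alignment(self) -> bool:
        # Harmony between the agent and its organs: individual health.
        return all(o.goal == self.goal for o in self.organs)

    def outer_alignment(self) -> bool:
        # Harmony between the agent and its parent system: morality.
        return self.parent is None or self.goal == self.parent.goal

society = Entity("society", goal="flourish")
person = society.add_organ(Entity("person", goal="flourish"))
person.add_organ(Entity("heart", goal="flourish"))
person.add_organ(Entity("ego", goal="status"))

print(person.inner_alignment())  # False: one organ pursues its own goal
print(person.outer_alignment())  # True: person and society share a goal
```

Note how, from the person's point along the hierarchy, the two questions look in opposite directions: inward at the organs, and outward at the parent system.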
Transcendence
Metaphorically, transcendence is about changing perspective, or scope of consideration, away from being distracted with the trees, toward seeing the greater forest. In other words, transcendence is about stepping back to see the bigger picture. In more technical terms, transcendence is about moving toward more fundamental, permanent, or antecedent phenomena. For example, when distinguishing between a system’s current state and its fundamental laws or behaviour, we might say that system behaviour transcends system state.
Within this text, transcendence is meant primarily as stepping away from instrumental goals or subprocesses, toward more intrinsic drives or tendencies. That is, we are stepping away from the means to an end, toward the end in itself. In the system-agent-organ hierarchy, this would mean moving upward or leftward—from organ to agent; from agent to system; or even from organ to system. To use an analogy, consider a finger, whose movements transcend to the hand, further to the arm, the torso, perhaps the brain, and then abstractly to the combination of evolved traits, prior learning, and current circumstance. In this way, transcendence is about tracing to more permanent, more fundamental, or at least more antecedent aspects of life and purpose. Such considerations are sometimes referred to as seeking “higher meaning”, where “meaning” refers to “what moves us”, or fundamentally why we behave as we do. Thus, transcendence is about tracing the higher or highest why of a system’s behaviour.
Direction of moral decree
Within the system-agent-organ hierarchy, one might ask which direction is serving the other. Do the organs serve the agent, or does the agent serve the organs? We might consider this question from various places along the hierarchy. For example, humans, like most animals, derive benefit from their surrounding ecosystem while simultaneously giving back in the form of carbon dioxide, intellect, and plastic. On a more stellar level, humans may eventually help spread the seeds of “lower” lifeforms to other celestial bodies. Or perhaps it will be humans’ silicon successors. In any case, symbiosis appears to exist across the system-agent-organ spectrum. “Lower” entities support higher entities in base function while higher entities support lower ones in organising and planning. If we go small, for example, enzymes enable cellular metabolism and base function while cells produce and manage enzymes. Yet even smaller, it would seem perhaps that subatomic waves enable subatomic particles while particles give order to waves. The same general pattern appears to exist throughout the hierarchy. Specifically, it seems that each direction serves the other.
Let us consider a more conventional example—babies. Do the parents serve the child, or does the child serve the parents? On a surface level, it would seem each serves emotional or physiological needs of the other. Yet the overarching biological imperative, if we trace it back, seems to serve the will of nature to find, extract, and release potential energy. This tends to occur through the evolution of replicators, or individual but diverse energy extraction machines. Some might even say the core aim of life is to increase systemic entropy. Does this mean the moral decree comes ultimately from below? Let us see.
Say we are human and have a full cake. We can eat it all at once, or we can extend its consumption through time. Our lower nature is to gobble it up quickly. The cake, after all, serves a core and immediate need of ours for energy. Yet a “more evolved”, higher tendency within our nature is to make the best of resources longer-term—at least when circumstances allow. In this example, a higher tendency acts as something of a controller or manager for lower systems. Yet those lower systems, including their imperative motivation, fundamentally enable their higher management systems. Does the person live to eat, or eat to live?
Say we are an animal. We produce, manage, and are made of cells. We are higher order, but we depend on our cells. We serve their needs within our capacity, as do they for us. Occasionally, per our higher wisdom, we may sacrifice perfectly good cells in the name of higher goals and longer-term needs. Our cells are like little machines, each with a set of chemically and genetically assigned roles and imperatives. They are smart at what they do, but their focus is very immediate, very local. They are sharp but shallow. And that is where we come in, to extend their effective wisdom with higher organisation and planning. We serve our cells by giving them access to hard-to-reach nutrients and other amenities. Without our higher organisation and planning, most of our cells would quickly perish. Yet without our cells, there would be very little capacity for planning or reaching of distant resources.
Say we are an enzyme. Our nature is to catalyse molecular reactions. We love finding suitable partners and getting busy. Like a programming loop, we keep cycling while the conditions allow. We are a subprocess, or “embodied theory”, of our environment. Our structurally-encoded wisdom allows molecules, and special pairs of molecules, to expend potential energy, possibly toward constructive ends. We are likely, at least when traced back far enough, the child of entropy and chaos. Not surprisingly, it would seem we continue both to serve, and to rely upon, entropy and chaos. Yet despite our “lower” or “adjacent” relationships and interactions, we often arise in the midst of higher management. Cells and tissues may produce, release, and toggle our being and function. They often do this per higher order to enable our continued function and proliferation. In a way, it seems everyone is working for and by our good friends, entropy and chaos.
On a quick aside: in the following paragraphs, by “reducible” we mean that certain higher-level, more complex, or more intelligent behaviours can be logically explained by the cumulative effect and/or interaction of lower, simpler behaviours. This type of relationship, where simpler, more concrete properties and behaviours logically build up to enable and support more complex or abstract ones, is called weak emergence. In contrast, strong emergence refers to the irreducible appearance of properties and behaviours, i.e. those which do not follow logically from simpler ones.
In the above three examples, it would seem there is something of a trade-off, or mix-up, between levels of the system-agent-organ hierarchy and whose goals are being served. There appears to be a tendency toward reducible, or bottom-up, origin of goals and imperatives. Yet it may be difficult in practice to rule out strong emergence, or the irreducible appearance of “new” (as in previously unobserved) properties and tendencies within a system—especially at higher levels. As an analogy, we might view strongly emergent features as hard-coded exceptions written into a simulation, where for example certain complex patterns of matter are anointed with new, irreducible functionality. But does this hypothetical possibility even matter?
If we go back to the basics, we see that matter appears to behave in fairly predictable ways. Sure, those ways may include quantum indeterminacy. But nevertheless, it seems everything follows predictable patterns, even if those patterns are on some levels peculiar. Back on the question of nature and the origins of goals and imperatives, any observed tendency, or “desire”, of sets of matter in particular configurations logically comes back to the origin of those “laws of nature”, or underlying physics, behind those tendencies or behaviours in question. And here is the rub: whether a feature or tendency can be traced back reducibly, or whether it appears only in special patterns of higher configuration, that feature still ultimately serves the transcendent origin of any and all features and tendencies. This sourcing of underlying imperative, or basis for any moral “ought”, would thus follow back equally for all three of (1) core observable physics, (2) weakly emergent properties, and (3) strongly emergent properties. All would serve that core, seemingly untouchable, transcendent essence of shared origin and imperative. Hence, it would seem that neither “end” of the system-agent-organ spectrum fundamentally controls the other. Rather, that invisible, merely inferred noumenon (or “thing-in-itself”) of highest transcendence would seem the thing ultimately being served. Any other description would be incomplete, requiring truncation of the causal chain.
Burning deeper, farther, perhaps longer
Despite the apparent logical unity of all things in serving that untouchable transcendent origination of configuration and will, from a practical vantage the two ends of the system-agent-organ spectrum can seem to possess distinct or even misaligned goals and imperatives. Smaller, shallower structures of energy extraction and reaction tend, absent higher control systems, to burn fast and carelessly. An electric battery in short circuit may expend its energy the fastest; connected to a vehicle, it may live long enough to see another charge. Within evolutionary unfolding across the universe, there appears to be a pervasive tendency toward higher precision and prudence, entailing deeper, farther, and often longer energy extraction and release pathways. The specific nature of this tendency depends on the integrative level of consideration. Let us behold four levels—micro, macro, mental, and moral.
On the micro level, energy may be sought by means ranging from shallow through deep. Biological organisms often employ increasingly complex enzymes to break down and utilise increasingly complex substrates. Relatively simple mutations can explain how new opportunities for energy extraction may arise. Each new variant is like a new random key, only to find its natural substrate counterpart to unlock potential energy.
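As a loose illustration of this “random key” picture, the following Python sketch generates random enzyme variants until one matches a substrate’s “lock”. The substrate strings, the four-letter alphabet, and the exact-match rule are purely hypothetical simplifications invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Hypothetical substrates, each "unlocked" by one specific key sequence.
SUBSTRATES = {"lignin": "acgt", "chitin": "ttga", "lactose": "gcat"}

def random_enzyme(length: int) -> str:
    """A mutation yields a new random 'key' sequence."""
    return "".join(random.choice("acgt") for _ in range(length))

def generations_until_match(substrate_key: str, max_generations: int = 1_000_000):
    """Produce random variants until one unlocks the substrate.

    Returns the number of variants tried, or None if the budget runs out.
    """
    for generation in range(1, max_generations + 1):
        if random_enzyme(len(substrate_key)) == substrate_key:
            return generation
    return None

print(generations_until_match(SUBSTRATES["lactose"]))
```

With a four-letter key there are only 256 possibilities, so a match is typically found within a few hundred variants; longer keys illustrate why deeper extraction pathways take evolutionary time to arise.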
On the macro level, evolved complexity tends to enable more distal release. Increasingly advanced motility supports going the distance to find new energy reserves. In terms of emergence, in addition to adaptation through natural selection, there exists a phenomenon known as ecological succession. This is when the ecological effects of earlier, often more rugged colonisers eventually transform the environment into something more habitable for more complex, often more fragile lifeforms. Sometimes the later-arriving organism feeds off the by-products of the former—or even feeds on the former directly. One might even say that humans are producing the by-products and environment needed by current and future artificial intelligence systems. Understandably, machines of metal and synthetic materials can likely reach energy reserves effectively off-limits to mere mortals—through both higher intelligence and higher resilience.
On the mental level, higher development tends to bring higher capacity for abstraction. This allows moving beyond the confines of singlehood, into social-symbolic interaction. Language, roles, and other abstractions then support higher levels of organisation, which ultimately enable longer lifespans, greater planning, and better tools for finding, extracting, and releasing potential energy.
On the moral level, greater awareness and experience bring new interpersonal and intramental powers of inference for resolving ever more elusive moral dilemmas harmoniously. The result, through successive bouts of moral development, is the expansion of the moral circle, both interpersonal and intrapersonal. That is, not just more people and other lifeforms, but also more of one’s own person, through both time and potentiality, can be arranged and accounted for fruitfully. The effect is greater distances, timespans, and potentialities of peace and cooperation, with greater overall systemic development and hence energy extraction.
From observing these four levels—micro, macro, mental, and moral—it would thus seem the natural evolutionary tendency of life is toward greater complexity, exploration, abstraction, and harmony.
Some readers may be familiar with a field of study having similar-sounding proposed findings of large-scale tendency: orthogenesis, or “progressive evolution”. Orthogenesis is the name given to a collection of mostly older theories about evolution, many of which are at odds with, or opposed to, Darwinian evolution by adaptation through natural selection. Many of these theories are markedly teleological, proposing that certain ideal forms or ends are sought—perhaps even by intelligent design. Before we proceed, let us take a moment to make a distinction. Those theories focus primarily on the organism’s form and function. In contrast, this text, with the above four key tendencies, focuses primarily on goal origination and alignment. Moreover, this text is concerned not with ideal forms or predestined ends, but rather with drives, goals, and morality. Nevertheless, for those interested, one noteworthy proposal is from D.W. McShea (1998), which preliminarily identifies the following eight trends:
[increased] entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, and complexity.
Consciousness and transcendence
For most grown humans, the subjective experience likely includes senses of wanting, thinking, choosing, and doing. Depending on various personal and situational factors, the person may be so immersed in the passions of life that little if any reflection is given to the origin or nature of one’s desires and aversions. Especially when anxious or frustrated, the mind tends to “zoom in” on the feelings and thoughts of attraction and repulsion—constant judgements, along with perhaps the thought and sensation of judgement from others. This onslaught of emotion can easily obscure the hidden source of all that emotion, or drive, behind those thoughts and perceptions. If we think about it, does having a subjective experience really stop humans from following their evolved nature, which, like that of all other lifeforms, seeks within its power to find, extract, and release more energy? Does the complexity of the personal plan, or the abstraction of language, change the core, common will which all serve?
One of the key functions of consciousness is social awareness and interaction, particularly language and self-consciousness, such as planning what to say, or following social norms and expectations. If we think about it, is worrying about our impression on, or judgement from, others not among the most frequent applications of conscious veto, or the ability to suppress urges and other behaviours? With this in mind, what function, in turn, does socialising serve in the bigger picture? It seems perhaps that its core role is to form coalitions, or aggregates, of individuals into groups. That is, socialising, to which consciousness seems intimately tied, appears to exist to make bigger beings, or superorganisms—hives which more effectively extract energy, in greater precision and from vaster regions of substrate.
Self-transcendence is about seeing and embracing the unchosen and impersonal nature and origin of will and worry—letting go of the earlier conceived notion of serving oneself alone. After all, if one’s fundamental imperative is not only of external origin, but apparently also natively toward higher aggregation through the evolved feature of consciousness, then is there really any other option? One way or another, the human serves something greater. Sure, denial is possible, and indeed quite common, but is that denial anything other than a mix of confusion and the instinct of trying to prove something?
Primitivism and progressivism
The underlying universal tendencies toward greater complexity, exploration, abstraction, and harmony in energy extraction are not universally embraced culturally. This incongruence is understandable, however, if we consider the nature of the pieces, how they themselves develop, and the limitations of existing industrialist and progressivist frameworks.
In perhaps the broadest sense, primitivism is about rejecting and reversing the cumulative hierarchical construction of society and/or technology. Effectively, primitivism seeks to avoid complex or hierarchical division of labour by keeping individuals and groups “self-sufficient”, “all-in-one”, or “jacks of all trades”. This flat layout concerns not only skills, but also tools and amenities. A core aim of primitivism is avoiding individual dependence on large or impersonal systems, which are believed to create alienation and lack of freedom. The primary values of primitivism seem to be independence, personal liberty, and free spirit.
In contrast, progressivism is about essentially open-ended advancement in the accumulation of empirical knowledge, building of technology, and development of social systems. In modern times, progressivism has placed particular emphasis on solving problems of economic inequality, corporate monopoly, worker conditions, and discrimination. Global cooperation, including toward managing resources, pathogens, and climate, has been another significant focus of progressivism. Further concerns have included human rights, animal welfare, and emotional health. Overall, the core values of progressivism seem to be science, technology, fairness, and health.
At first glance, the primitivist values of independence, personal liberty, and free spirit may seem quite reasonable. Yet if we step back and see how this plays out in real life, what becomes blatantly apparent is that sometimes it is not the presence of certain values, but rather the absence of others that brings trouble. Let us see some examples.
Misplaced aggression—During the industrial revolution, many traditional textile workers found themselves replaced by factory machines. To avenge their lost status and livelihood, some of these workers, called Luddites, took to attacking the machines. In this case, not only did the very liberty the workers likely valued enable the development and deployment of their machine replacements, but that same liberty likely justified the attack on the machines. This type of dog-eat-dog, state-of-nature, misplaced aggression seems natural in the absence of self-reflection and higher organisation. That is, when each person or group sees itself as inherently separate from the others, it fails to find and secure harmony and alignment for what ultimately already is, and has always been, a shared system with entities and wills of shared origin.
Blind heuristics—A well known psychological phenomenon is that people operate by heuristics, or rules-of-thumb, as gathered through trial-and-error experience. Similarly well known is that these heuristics develop consecutively from shallow and near to deep and far. That is, a child’s understanding of causation will be mostly local and simple, sometimes even magical. Throughout life, as new experiences and education are encountered, a person’s heuristics will slowly be amended or replaced, usually with more complex, more abstract variants. This process of cognitive and indeed moral development can be stressful and aversive. For this reason, many individuals shun the idea of updating their worldview. Usually there has to be a clear dilemma in keeping the old way, for new ways and views to be embraced. With all this in mind, primitivism is arguably part of the default value set for early reasoning. Higher reasoning, as inherent to progressivism, is cognitively more demanding, and generally requires more experience and education.
Imagined agency—One of the things about having free spirit is the freedom to infer agency wherever the imagination fancies. Humans are seemingly hard-wired to detect agency, or intelligent sourcehood, behind natural phenomena, including the weather. This tendency likely arises from a combination of being social animals and having evolved in complex, confusing environments like the jungle, where pretty much anything could have a trick up its sleeve. At first, this tendency may seem like no big deal. It presumably evolved for a reason. But if we look at actual history, perhaps including some current tribes, we see that unscientific fancies of the imagination have resulted in tribes and sometimes individuals not just inferring agency in the form of various gods, but further ascribing curious features and imperatives into those gods. The envisioned beings, for example, may have thirst for flesh of the living. Or perhaps certain traits or identities have been chosen as more or less worthy. Without science, the human mind tends to devise questionable beliefs. In this way, the personal liberty of primitivism, untempered by refined reason, may entail disharmonious and rather delirious ways.
Compulsory eugenics—Lack of education and reflection, especially in the area of psychology, has further ill effects. The underlying aims behind human drives can be rather elusive. And the less education and self-reflection one has, the more likely one is to follow blind animal instincts, often while giving less-than-accurate rationalisations for the resulting feelings and behaviour. Humans, like other animals, have self-preservation instincts. Higher social animals, in particular, have their self-preservation split into physical and social-symbolic aspects. The latter may be described as ego drive, or the motivation behind maintaining one’s social image and status—one’s symbolic “self”. But what exactly is the ego’s core function? Why did it evolve? If we look at the role of social image and status, it seems their function is to sort individuals into a social hierarchy of better and worse. Specifically, the ego’s core function is to compete for social resources, most notably reproductive access. So ego is fundamentally about filtering genes. Back in the tribe, the criteria for comparison were probably things like health and physical prowess. But in modern times, the game of social hierarchy has become much more symbolic. Now, those who climb the ladder often do so through narcissistic games of deception and exploitation. This has shifted the compulsory game of eugenics toward selecting something morally questionable. To be fair, in a truly primitivist environment, with simple tribes, perhaps the game would tend back to selecting mostly physical traits. But to have a primitivist mindset in the modern environment is asking to remain blind to this shady and perverted carry-over from the animalistic past. Only with education and/or wiser management can we expect to avoid blindly following dysfunctional urges.
As can be seen with the above set of examples, primitivism has an overarching tendency to lack the depth of reasoning, exploration of explanatory frameworks, and interdisciplinary or interlogic harmony needed for reliable individual insight and proper systemic function. What may look like a good idea on the surface becomes in reality quite oppressive through its lack of understanding and lack of organising. True, its participants may not know better, being lost in the trees; but we have the oversight perspective to ascend.
Authority and liberty
On the topic of top-down, authoritarian control structures, as are sometimes seen in human society, one mistake we must avoid is thinking the leaders of such a structure are somehow equal to the structure or body itself, from which we might infer emergent will, goal, or imperative. This possible confusion is similar to confusing the face, words, or identity of a person with the whole person. Specifically, what a person says—or even thinks—about themselves is not always true, accurate, or complete. This applies likewise to what others think or say about the person. The words or expressed will of a body is only one small part of the total expression of said body. And the true will or imperative is of the whole body. For those unaccustomed, this idea may seem counterintuitive, since conventionally society is built from, and comprised of, an abstract symbolic framework, where individuals are not truly bodies, but rather nodes of the abstract framework held within each member’s mind. On a related note, when we think of someone we know—including ourselves—we are really thinking the mental node that corresponds to the hypothetical person. No, we are not thinking of the node, but simply thinking the node. This is because thoughts, including of people, are merely representations appearing within consciousness.
Earlier we mentioned the general tendency of life toward greater abstraction and complexity, with an added dose of self-transcendence through the help of consciousness. Speaking of such things as aggregates, higher order, and superorganisms may evoke thoughts of authoritarian or totalitarian, top-down social structures. Such confusion, should it arise, is understandable. When we speak of systems being more carefully ordered, with perhaps more levels of abstraction, the idea of orderliness may be conflated with that of dictatorial control. The higher order “control” that life seems to seek, however, is rather different. On the surface, a human body with a brain “controlling” it may appear similar to a dictatorship with a leader controlling the people. But there are some very important differences:
The human mind experiences pain and often suffering when the body is hurt or misused. Unlike in a dictatorship, these signals strongly influence the mind’s current and future choices.
The brain and hence mind depend on the body’s wellbeing and continuation. Unlike in a dictatorship, there is no dictator who can take themselves or their family to another land if the current one fails.
The brain is directly and strongly compelled by chemical signals from the rest of the body, toward ends such as rest or sustenance. Unlike in a dictatorship, the very thoughts and desires that arise are inescapably tied to substances from the greater body.
The brain shares the same genome as the other organs of the body. Hence, unlike in a dictatorship, there is no competition for reproductive access—unless, of course, it really is a tumour.
As we can see from the above points, the human brain, unlike a human dictator, is more closely and automatically aligned with the will of the overall body. We might even say the brain serves the body, its organs, its cells, and even its genes. If we think about it, even the mind has a similar dynamic, where reason and thought serve emotion. Hence, the thing that speaks is a slave to, or even emergent property of, the unconscious forces that drive it. This is rather the opposite of a dictator in a dictatorship.
Back on the topic of higher order “control” and society, the type of organising and control it would seem life seeks is of the naturally emergent variety. This is fairly akin to collective intelligence, or gestalt consciousness, as might emerge from hives or flocks. Similar to how the human brain can be made of cells which self-organise and enable intelligence, there need not be any internal top-down control structure, nor any leader. Higher reason emerges from the whole—as its own, “new” entity. Still, though distinct, its behaviour naturally serves the same core will—even if that will is unknown to the individual members. As suggested earlier with respect to “direction of moral decree”, all levels automatically serve all others, as all matter automatically serves the global transcendent origination. So, unlike in a dictatorship, higher order organisation can occur while individual members retain their individual liberty—or at least what we may call “individual” liberty.
Addiction and narcissism
Despite the apparent unity of various system-agent-organ levels, along with adjacent members of a given level, in serving the same transcendent origination, the mental model, or epistemic subsystem, of a given entity is not always accurate or aligned with others. Often this is due simply to the chaotic, biased, trial-and-error way that world models and self models are formed. For practical purposes, this chaotic aspect of error and bias may be largely unavoidable, at least during an agent’s early learning. But there is another common cause of troublesome models, known as reward hacking. This is where the instrumental subsystem, or the part of an agent that tries to find ways to satisfy goals and imperatives, “accidentally” discovers that it can “achieve” the desirable end by simply modifying or perverting the world model or self model to indicate a better world state without actually arriving at a better world state. In other words, satisfying goals is replaced with painting an illusory model where the goal is already satisfied. From an engineering standpoint, we might say that reward hacking short-circuits the success signal.
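A minimal Python sketch of this short-circuit: the dictionaries, the `satisfaction` threshold, and the two actions are illustrative assumptions, not a claim about any real agent architecture. The key point is only that the success signal reads from the model, not from the world.

```python
world = {"food": 0}   # actual world state
model = {"food": 0}   # the agent's belief about the world

def satisfaction(agent_model: dict) -> bool:
    """Success signal: computed from the model, not from the world."""
    return agent_model["food"] >= 3

def act_honestly() -> None:
    """Change the world, then update the belief to match."""
    world["food"] += 1
    model["food"] = world["food"]

def reward_hack() -> None:
    """Edit only the belief; the world is untouched."""
    model["food"] = 999

act_honestly()
print(satisfaction(model))  # False -- one unit of food is not enough yet

reward_hack()
print(satisfaction(model))  # True  -- the signal now reports success...
print(world["food"])        # 1     -- ...but only one unit actually exists
```

The hack is “cheaper” than honest action precisely because the signal has no independent access to the world, which is why accurate signalling matters in the paragraphs below.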
Perhaps the most blatant version of short-circuiting the success signal is through substances or other addictions that stimulate directly, or nearly directly, the senses or bodily indicators for satisfaction. Naturally, without accurate signalling, the agent may fail to achieve its imperatives.
An often less obvious case of reward hacking is narcissism. Here, sometimes the general world model, but most certainly the self model, is corrupted by the agent’s insatiable desire to maintain or elevate social status. Instead of doing valuable things, and letting one’s own and others’ models update accordingly, the narcissist seeks to manipulate and deceive the model building faculties of self and other—to paint a false but agreeable picture of oneself and one’s objects of identification. Put simply, the narcissist tries to short-circuit everyone’s signals. Within the social environment, members who short-circuit their own and others’ signals can be expected to degrade systemic function and efficiency.
Addiction and narcissism thus appear to go against the universal tendency of life toward greater exploration and harmony.
Free will
From the perspective of the individual agent, the aim of life is to align environmental opportunity with personal potential. In the case of social agents, we might further distinguish between epistemic will, or believed desire, and actual will, or true tendency. This distinction relates to the earlier mentioned difference between identified leaders, or controller organs, and the actual emergent intelligence and behaviour of a system. That is, a mind or leader may express or declare certain epistemic will, but only the full body’s behaviour can show actual will.
Whether a given action is said to have been taken in free will depends on (1) the nature of reality, (2) the philosophical class of “free will” in question, and (3) the alignment of said action with either (a) hypothetical actual will absent undue influence, or (b) epistemic will. Often, actions are classified as free or unfree in order to ascribe moral responsibility, which is ultimately a social construct. Sometimes, the question is not about individual actions, but whether the agent in general possesses a particular class of free will. This latter question, in practice, seems more concerned with meaning, or purpose.
Fortunately, as discussed earlier about higher order “control”—as from higher emergent levels—individuals may essentially go on living by their same main intrinsic drives. The type of systemic progress that life seems to seek appears, in practice, to emerge mostly from the usual continued refinement of scientific knowledge, technology, social structure, and individual self-awareness.
We might say that throughout the superorganism that is life and substrate, there exist will channels, or hierarchies of will delegation. At the level of the agent, will may arrive through multiple channels. In the case of an individual human, for example:
Basic intrinsic drives such as self-preservation (both physical and social-symbolic, or egoic), reproduction, sustenance, and curiosity arrive through inherited genes and epigenetics. These are stable over the long term and, for most people, seemingly taken entirely for granted.
Early childhood conditioning then comes along to configure certain aspects of the individual’s personality, such as attachment style, felt need to control others, and level of empathy. These are mostly stable, but they can slowly shift, depending on later long-term environment and other factors.
Culture and socialisation then begin to instill social norms, expected behaviours, and even expected beliefs. A curious phenomenon exists, related to automaticity, or the tendency of thoughts and behaviours to shift slowly into the unconscious, where they become second-nature. Specifically, beliefs, thoughts, and behaviours repeated enough times become automatic, unreflected, and low-resistance. This last part is key, as future decision and conception pathways will be guided by the level of mental resistance felt toward prospective ideas and actions. Hence, even “freely willed” decisions and beliefs would be largely influenced by culture and socialisation.
Curated opportunity is ushered in by the present society and environment of the agent, pre-choosing the availability and secondary consequences of prospective options—and indeed sometimes even prospective beliefs. “Will” in this form can be sneaky, as options and resources can be made artificially scarce, thus changing their perceived value and actual price. And just like for the previous three channels of will entry, the process of choosing often feels natural and “free”.
One of the common themes in the human experience of “free will” seems to be that the part of will that is installed by the environment is usually black-boxed, or taken for granted, while the remaining portion of the decision process—the instrumental pathway—is open for review. Thus, the role of human agency seems to be in consciously devising ways to satisfy the received imperative. Each person has been provided a set of intrinsic drives, childhood conditioning, socialisation, and curated opportunity for which a viable pathway is to be found. Fortunately, this experience can be enhanced through the progressive development of upbringing, socialisation, and opportunity.
Finding alignment
As far as human society goes, most of the changes needed to move forward and increase systemic harmony surround conventional technological, social, and individual psychological development. Higher order progress, after all, tends to emerge from lower order refinement. Still, there are some key areas that have not yet received universal uplifting. Many of these involve individual education and societal management of resources. Some nations are doing fairly well in these areas while others are far behind. Yet room exists for substantial progress in a great many places.
There is one matter, however, that seems particularly misaligned in maybe even most places today. This is the problem of higher level management positions, whether of business or government, being operated with lower level mindsets. Specifically, the type of “level” here meant is the level of system-agent-organ self-reflection and transcendental awareness. Consider the following. Say you have someone who has never taken the time to reflect back on their own upbringing, culture, and other influences to understand how and why they have become as they are, and not some alternative way. Say this person has never taken the time to meditate and self-reflect in any real depth on their own mind and decision process. The result would be a type of self-blindness, very much like that described above for the “black box” of will origin in the unreflected version of the “free will” experience. The question, then, is this: can a person who is blind to their own nature and the transcendental aspect of their will really make sound decisions toward configuring society for maximum will alignment?
Nature’s job placement system
When there exists potential energy waiting for release, all it takes is a little bit of chaos to find and replicate the right key for the job. Many organisms have evolved to exploit these opportunities—whether as primary aim, or as secondary subgoal. This much is fairly well known. But there are other, analogous versions of this pattern within nature, including within human nature. One case is that of epigenetics. Another is early childhood emotional conditioning.
In epigenetics, environmental factors, such as chemicals and hormones during gestation, leave long-lasting, often permanent changes to the way cells behave, interact, and arrange. One common example is stress hormones, which if elevated during certain periods of gestation can result in epigenetic changes that promote a lifetime of high anxiety in the offspring. This effect, once instilled in the child, cannot simply be willed away. A related example is gender. Such changes essentially assign a particular role or inclination before individual thought and deliberation—presumably selected by evolution to fill roles deemed as in-demand within the present environment.
In early childhood emotional conditioning, the behaviour of caregivers and peers configures the brain toward certain emotional needs and aversions. These changes, as with epigenetic ones, often last a lifetime. One common example is attachment style, where the caregiver’s showing, lacking, or withholding of affection and support sets the child’s long-term expectations and social interaction style. Future relationships and relationship roles will likely be chosen based on this conditioning. A common theme for early childhood emotional conditioning is that whatever instinctual emotional need is found significantly unmet will likely become a long-term goal or fantasy of the individual, often for life. For example, a child who felt behaviourally constricted, as from parental overcontrol, will likely develop the felt emotional need to control others—as a form of compensation for the ongoing subconscious frustration from a broken past which cannot be fixed. As another example, a child exposed to an environment lacking in stability and predictability may develop the lifelong felt need not only to seek stability, but also to create stability. Such emotional conditioning delegates roles and inclinations to the person, again to fill needs perceived as missing. And because these changes occur so early in development, their effect is usually taken for granted as simply who or what one is—often felt simply as inherently right, or self-evident, without the need to explain. Once again, the part of this will that is perhaps free is the instrumental path taken to satisfy the black-box of subconscious emotional need.
Human moral development
Nature’s job placement goes further, however—into the realm of meaning, morality, and identity. A fairly well studied yet not so widely known phenomenon is that of human ego development, along with its corollary, human moral development. These two bodies of research fundamentally investigate the same core phenomenon from two different perspectives. While the details vary depending on methodology, the common theme is that individual humans develop progressively through a sequence of relatively discrete stages of ego and moral reasoning. That is, the person’s worldview, or understanding of the relations between, and fundamental meaning of, self and other changes through bouts of disintegration, or the emotional breaking down of old views to rebuild new ones. And this process, though deeply personal, follows fairly consistent patterns across individuals and cultures. Moreover, examination of the stages, their traits, and their themes uncovers important trends and tendencies. Let us examine two such theories, both based on empirical data.
Lawrence Kohlberg, in his research on stages of moral development (1976-1981), investigated the types of moral reasoning employed by humans at various ages. The following stages were found:
K1 -- Obedience and punishment orientation—might makes right; purely consequentialist; personal goodness or badness based on outcome.
K2 -- Self-interest orientation—concern for others only in-so-far as it affects oneself in the short term; quid pro quo.
K3 -- Interpersonal accord and conformity—concern for one’s reputation and the wellbeing of peers and others within proximity.
K4 -- Authority and social-order maintaining orientation—following formal rules and partaking in structured social roles.
K5 -- Social contract orientation—recognising and respecting that each individual has unique and personally significant needs.
K6 -- Universal ethical principles—seeing that certain values and principles transcend and supersede formal rules and individual opinion.
K7 -- Transcendental morality—seeing that individual values and will derive ultimately from outside forces, such as evolution and physics.
The above descriptions provide only a brief gist of the stages. Note particularly that one does not arrive at a stage simply by reading about or understanding it intellectually. Rather, one must encounter through actual experience significant personal moral dilemmas which cannot be resolved via one’s existing worldview of right and wrong, good and bad. The process of transition between stages is often accompanied by periods of personal existential crisis, or meaning crisis, at least for entering stages K4.5 and higher. The transition to K4.5 and K5 in particular often accompanies or comprises the quarter-life or mid-life crisis. Moving to K6 is rare, usually requiring particularly troublesome moral dilemmas. K7 is similar but more complete, where the existing personal worldview is found fundamentally flawed, sometimes irreparably—thus requiring self-transcendence, or relinquishing the notion that one’s will and values are of one’s personal making.
Jane Loevinger, in her research on stages of ego development (1976-1996), investigated the ways in which personal human worldview changes throughout life. The following stages were identified:
E2 / ~K1 -- Impulsive—concerned with present bodily impulses and how they are affected by various surrounding people, places, and things.
E3 / ~K2 -- Self-protective—focused on playing the ball as it lies, working around environmental hurdles toward self-satisfaction; hedonistic.
E4 / ~K3 -- Conformist—interested in following and possibly enforcing social and group norms; right for one, right for all; tribalistic.
E5 / ~K4 -- Self-aware—taken to maintaining formal social roles and social hierarchy; individuals as social-symbolic entities.
E6 / ~K4+ -- Conscientious—subscribed to self-determination, or placing one’s own individual needs above group norms, when viable.
E7 / ~K5 -- Individualistic—opened up to the idea that each party has its own needs; seeking mutual respect of those needs, when possible.
E8 / ~K6 -- Autonomous—humbled by the real-world limitations to self-determination, instead prioritising higher purpose and harmony.
E9 / ~K7 -- Integrated—removed from preoccupation with specific personal values, to see the interplay of mind, emotion, and matter.
E10 / ~K7+ -- Flowing—released into the play and flow of the subjective experience, free from resistance, attachment, and identification.
As with the moral stages, the above ego stage descriptions offer only limited detail. Both speak of the same process, but viewed through alternate lenses. For our purposes here, and a big part of why these stages were brought up, it so happens that these stages act as yet another element of nature’s job placement system. Specifically, individuals at each successive stage have respectively larger and longer scopes of interpersonal inference, or breadths of agentic causal awareness, as outlined here:
E2 / ~K1 -- Limited to immediate personal and material causation, without theory-of-mind or interpersonal awareness.
E3 / ~K2 -- Basic theory-of-mind; ability to consider short to medium term consequences.
E4 / ~K3 -- Deeper, emotional theory-of-mind; longer-term consideration of consequences; awareness of social norms and tribal relations.
E5 / ~K4 -- Social-symbolic awareness, including specific roles and hierarchy; symbolic interactionism and codified procedures.
E6 / ~K4+ -- Awareness of certain key limitations in official roles, rules, norms, and procedures.
E7 / ~K5 -- Higher intramental reflection, or seeing that each individual has a rich, meaningful inner world guiding their actions.
E8 / ~K6 -- Understanding which broad classes of behaviour and interpersonal arrangement are likely to bring long-term suffering or prosperity.
E9 / ~K7 -- Seeing that values, will, and actions are natural products of certain arrangements, and thus of external, or transcendental, origin.
E10 / ~K7+ -- Knowing and experiencing that complex actions and even many thoughts can carry through without volitional involvement or judgement; knowing that suffering is not from pain itself, but from mental resistance to sensations, perceptions, and thoughts.
As shown above, throughout ego development, individuals accumulate key insights into what makes self and other tick, including what types of personal and interpersonal arrangement produce what types of outcome. After stage E5/K4, there are progressively fewer individuals at each successive stage. This creates a natural distribution of humans where many focus on the more local and shorter term while some focus on the more global or longer term. From a hierarchical, managerial perspective, such a distribution could make sense. As mentioned earlier about finding alignment, it makes little sense to have those with lower-stage mindsets overseeing public policy or other high-level matters. Thus, having a proper understanding of moral development and its evolutionary role could perhaps allow societies to be managed in wiser ways.
The is-ought “problem”
A popular argument against naturalistic accounts of morality is called the is-ought problem. This position, as advanced by philosopher David Hume (1711-1776), suggests that is statements alone are insufficient for making ought statements. The basic idea is that no matter the scenario in question, whether or what “ought” to happen ultimately arises extrinsically, or from the observer or beholder, rather than intrinsically, or from the scenario itself. This position seems to make sense from the perspective or assumption that the observer or beholder is inherently separate from the scenario in question. Yet, absent dualistic accounts of agency, this position, in practice—that is, outside the written page—may not truly apply to actual, real-world agents. With that said, the is-ought distinction can still serve as an important reminder to check our assumptions when making ought statements. Too often, as with the self-blindness discussed above, authors and others have pushed ought statements, or ideas based on implied ought statements, without proper self-reflection on the origin of will and belief.
One specific consideration is that not all “oughts” come as statements. For example, agents behave by a certain core operating procedure. Humans, for example, operate by a pervasive mode of cognitive dissonance reduction, where all concerted thought and action follows from trying to minimise the internal disagreement between beliefs, desires, and/or perceptions. This happens automatically, without choice or practical ability to veto. This is the basis behind such games as “try not to think of a pink elephant”. Because certain processing, or goal-directed behaviour, happens automatically, implicit oughts are smuggled in uncontrollably. For example, even the very motivation and logical basis behind trying to argue a position—such as suggesting that “ought” may not derive from “is”—is incontrovertibly tied to implied, likely shared “oughts”. Hence, the act of arguing that is-ought is a problem may be inherently contradictory.
The universal transcendent origination of will acts like a giant octopus with many arms reaching out and branching off through hierarchies of emergent beings, or aggregates of agents. In this way, the universal transcendent provides stable, semi-stable, and fluid will through various channels. These include through genes, conditioning, and social-symbolic interaction. Through a process of chaos, trial-and-error, and the butterfly effect, endpoints receive random embodied theories, or novel energy extraction instructions. These may or may not work short or long term, but each has been assigned an essential part of the bell curve of possibilities, along with a random shuffling of circumstances. Each random assignment, though perhaps unique, is but a card in the universal transcendent’s game. An agent within this paradigm may experience many instances of what may look and feel like free will—including the deliberation of whether or how much to accept or reject a description of universal will and morality, such as this text. Curiously, however, even if one should reject such a description—or even the very notion that will can be traced back to antecedent priors—one is still bound to those drives and externally imposed circumstances which befell one. And all channels, ultimately, lead back to the same shared source.
Obviously our understanding of such abstract matters as universal life tendencies is limited and subject to amendment. But whether we embrace or reject that understanding makes no difference to the fact that we must indeed live by the reality of whatever those tendencies be. Unlike purely removed or practically inconsequential philosophical questions, we cannot safely just ignore questions of broader societal direction, purpose, and morality. For if we did, the realistically expected outcome would be that shallower, more whimsical, more purely emotional or even delusional beliefs would take its place. So the matter is not about choosing or not choosing a moral framework, but about taking something imperfect yet logical versus letting the matter fall into the hands of unreflecting, likely ignorant mindsets. This is similar to the topic of religion, where whether or not we identify with a particular belief system, we are nevertheless following something. The question is how reflective we are about that something, and how well the puzzle pieces fit.
One common objection to transcendent descriptions of agency is that if will and path are essentially “determined” by “external” factors, then no room exists for choice or improvement. This fear, fortunately, misses the mark. Agents operate by mental models of inferred reality. And these models are the result of life’s encounters. Potentially insightful models or frameworks of universal will delegation are one such example of what an agent may encounter. Whether or not we feel like describing an agent as possessing free will or free choice, that agent is still open to reconfiguration with each new encounter. Freely willed or otherwise, agents within a system can realistically be expected to learn continually. And in the case of humans, as per the above mentioned system of automatic dissonance reduction, each mentally functional individual can be expected to learn continually and unavoidably. Hence, depending on perspective, there may or may not be individual choice, but ideas can nevertheless arise, and they can still propagate automatically through systems of agents. Individual and systemic change is thus inescapable.
Artificial superintelligence
The question of will and imperative can get tricky when we begin introducing agents of intelligent design. Unlike those of organically arising or emerging agents, the intrinsic drives programmed into an artificial agent were generally not evolved specifically around the tasks of finding, extracting, and releasing potential energy. Instead, the assigned objective is usually a subset of relatively high-level instrumental goals from a particular, somewhat arbitrarily chosen organ of a human social system. That is, if we go back to the system-agent-organ hierarchy, we may recall that organs serve context-dependent subgoals of their agent. When humans design an AI, they are usually doing so to satisfy a set of human needs, which in practice are often only instrumental to those particular humans. If the AI produced is disproportionately powerful or obsessive, in that it fails or refuses to yield back control to its parent, then we may get a runaway instrumental subprocess, such as the classic paperclip maximiser, which seeks insatiably to produce as many paperclips as materially possible, to the detriment of surrounding agents, organs, and systems. Put simply, when artificial agents are introduced, they must either be limited in power and perseverance, or they must align not with a subgoal of the greater system, but with the highest goals and imperatives—above and beyond mere mortals. In other words, if the AI is to function toward specific instrumental goals, it must be limited in power and perseverance. This keeps it acting as an organ, rather than as a paperclip maximiser. But if the AI should serve one-to-one the universal transcendent, then it possesses the same core nature as its parent system, in that its introduction, no matter how powerful, cannot be expected to deviate from what life was moving toward anyway.
In practice, humans cannot be expected to find and properly understand on their own those transcendent-most goals and imperatives of life and universe. At best, humans may arrive at preliminary approximations. With the help of strong narrow AI, on the other hand, it may be feasible to begin exploring the will-space to uncover, at many integrative levels, the core intrinsic values, or core goals and imperatives, of the biosphere and beyond. The results of this discovery process could then be fed in as the intrinsic value set to an artificial superintelligence (ASI), whose degree of power and autonomy is set proportional to the lower-bound of the degree of certainty on the current set of transcendental imperatives. In simple terms, strong narrow AI would figure out what life wants on various integrative levels through time while its findings would serve as the will of strong general AI.
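The suggestion of tying the ASI’s granted power and autonomy to the lower bound of certainty on the current imperative set can be sketched minimally. The function and confidence values below are hypothetical, invented purely to make the “lower-bound” idea concrete:

```python
# Hypothetical sketch: grant autonomy in proportion to the LEAST certain
# of the currently inferred imperatives, so one shaky value caps the whole.

def granted_autonomy(confidences, max_autonomy=1.0):
    """Scale autonomy (0..max) by the lower bound of imperative confidence.

    confidences: per-imperative certainty estimates, each in 0..1.
    """
    if not confidences:
        return 0.0  # no vetted imperatives, no autonomy
    return max_autonomy * min(confidences)


# Three imperatives inferred by the narrow-AI discovery process (made-up values):
print(granted_autonomy([0.9, 0.95, 0.4]))  # capped by the weakest: 0.4
```

The design choice here is conservative by construction: improving confidence in the best-understood imperatives changes nothing; only shoring up the least-understood one unlocks further autonomy.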
On the matter of aligning strong general AI, various researchers, enthusiasts, philosophers, and ethicists have made suggestions or proposals. Let us compare a few with the above preliminary suggestion.
Computer scientist Stuart J. Russell has proposed three core principles for the developers of AI:
The machine’s only objective is to maximize the realization of human preferences.
The machine is initially uncertain about what those preferences are.
The ultimate source of information about human preferences is human behavior.
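Principles 2 and 3 (initial uncertainty over preferences, resolved by observing human behaviour) can be sketched as a simple Bayesian update. The hypotheses and likelihood numbers below are invented for illustration only and are not from Russell's proposal itself:

```python
# Toy Bayesian sketch of Russell's principles 2 and 3: the machine starts
# uncertain over candidate human preferences and updates from observed choices.

def update(prior, likelihoods):
    """One Bayesian update: posterior proportional to prior x likelihood."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}


# Two hypothetical candidate preferences the machine cannot distinguish a priori.
belief = {"prefers_tea": 0.5, "prefers_coffee": 0.5}

# Observed behaviour: the human picks tea. Assumed likelihoods of that choice
# under each hypothesis (made-up numbers).
belief = update(belief, {"prefers_tea": 0.9, "prefers_coffee": 0.2})

print(max(belief, key=belief.get))  # prints "prefers_tea"
```

Note that the machine never becomes fully certain from one observation; uncertainty shrinks gradually, which is precisely what keeps it corrigible under principle 2.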
This proposal about human preferences appears to follow or align with a philosophy known as preference utilitarianism. The basic idea of this philosophy is that preferences of sentient beings—in the above, seemingly restricted to humans—are to be maximised across time and society. A potential issue with preference utilitarianism is that it may place too much emphasis on what beings want right now, in their current stage of knowledge, wisdom, and moral development. If we take a society of warring factions, it does not take much imagination to see how maximising preferences could lead to an arms-race explosion, with excessive resources spent on offence and defence, rather than more peaceful possibilities. As another example, imagine a society where the majority follows old cultural superstitions, rather than caring for scientific inquiry. It seems conceivable that following preference utilitarianism here could result in tyranny of the majority, where common but fallacious beliefs would be amplified so as to maximise preferences per current norms. This comes back to the issue of the black-box of agentic will, or self-blindness, as described earlier. Specifically, current beliefs, even if mistaken, may be so deeply conditioned that preference alone picks the comfortable familiar over the inconvenient process of updating one’s worldview. Another potential issue with assessing will at the individual agent level is that such will is often situation-dependent and instrumental. Since instrumental pursuits are essentially subprocesses, or organs, of the higher goal, detecting and amplifying these can be inherently imbalanced and hence misaligned. Without transcendental awareness, we might even create a runaway subprocess, or instrumental obsession, like the paperclip maximiser.
Philosopher John Rawls (1921-2002) proposed that the human sense of justice is based on the process and ideal conclusion of considering the whole of one’s moral principles, along with the totality of specific encountered cases and judgements, and establishing a consistent, stable belief structure, termed reflective equilibrium. Similar to preference utilitarianism, reflective equilibrium works with what the agent knows or believes at present. But unlike preference utilitarianism, reflective equilibrium takes it further by requiring coherence between and among those beliefs and judgements held by the individual. Hence, instead of maximising potentially contradictory preferences as-is, first the preferences would be brought into agreement through reflection and revision. Nevertheless, while the addition of coherence between preferences seems beneficial, both of these proposals appear to be vulnerable to (a) magnifying existing ignorance and (b) overemphasising instrumental priorities.
These two issues—magnifying existing ignorance and overemphasising instrumental priorities—can perhaps be alleviated by becoming an ideal observer. According to ideal observer theory, true morality—and by extension, true preference—is what would be arrived at by a calm, impartial, fully-informed agent, termed the ideal observer. Essentially, this hypothetical entity would possess omniscience with respect to the matter in question. The earlier suggested strong narrow AI—whose objective it is to figure out, at various integrative levels, what life is tending toward—may be an example of such an ideal observer with respect to the preferences of life. This fully-informed entity or module would presumably have both the knowledge to dissolve existing ignorance, as well as the oversight to avoid instrumental imbalance.
AI theorist David Shapiro has proposed a set of three guidelines, called Heuristic Imperatives (HI), that are to be weighed together in a colloquial, heuristic manner—or as would be interpreted within the social-cultural-linguistic context. These imperatives are proposed to be followed by any AI otherwise capable of causing significant harm. They are as follows:
Reduce suffering in the universe.
Increase prosperity in the universe.
Increase understanding in the universe.
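How three such imperatives might be “weighed together” can be sketched as a toy multi-objective score. The actions, estimates, and equal weights below are entirely hypothetical and are not part of Shapiro's proposal, which is deliberately colloquial rather than numeric:

```python
# Hypothetical sketch: combining the three Heuristic Imperatives into a
# single score per candidate action (all numbers are made up).

IMPERATIVES = ("reduce_suffering", "increase_prosperity", "increase_understanding")

def hi_score(estimates, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of per-imperative effect estimates (each in -1..1)."""
    return sum(w * estimates[k] for w, k in zip(weights, IMPERATIVES))


actions = {
    "fund_basic_research": {"reduce_suffering": 0.2,
                            "increase_prosperity": 0.3,
                            "increase_understanding": 0.9},
    "do_nothing":          {"reduce_suffering": 0.0,
                            "increase_prosperity": 0.0,
                            "increase_understanding": 0.0},
}

best = max(actions, key=lambda a: hi_score(actions[a]))
print(best)  # prints "fund_basic_research"
```

Even this crude sketch exposes the critique that follows: the estimates and weights must come from somewhere, and nothing in the imperatives themselves says which integrative level should supply them.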
This approach, within the context of human society, is likely human-centric. Moreover, the prime imperatives of life are simply given, seemingly without the employment of any AI search and discovery. The focus of these imperatives seems to come from a particular, agent-centric integrative level. In a way, this is similar to preference utilitarianism, but with the added instruction of increasing understanding. Depending on interpretation, this last imperative may or may not satisfy reflective equilibrium and ideal observership. Overall, the lack of search and discovery of life’s will, and the hard-coded integrative level, make this approach perhaps simple, but not necessarily fully aligned. Say, for example, these imperatives were misaligned in an important but elusive way. Would their presence within superintelligent systems prevent their being updated for fuller alignment? Might their application overemphasise particular instrumental priorities?
Finally, AI theorist Eliezer Yudkowsky has proposed a model similar to the combination of preference utilitarianism, reflective equilibrium, and ideal observer theory. This model, called Coherent Extrapolated Volition (CEV), proposes using a “seed AI” to establish basically what smarter, faster, more self-actualised, more interpersonally aware humans would want, given sufficient insight and time for reflection. Then, similar to what was suggested earlier, the results of the “seed” AI would guide the will of the acting agent. Overall, CEV appears fairly comparable to the suggestion made earlier in this text, although CEV seems more human-centric, and less transcendental.
The questions of extrahumanistic, transhumanistic, and transcendental will extrapolation are curious. But unfortunately these are beyond the scope of this text. Nevertheless, it shall be remarked that any binding formulation of goals and imperatives ought to make room for all three of these possibilities—if not now, then at least down the road. Failing to keep the intrinsic values of a system open to outer, later, and greater interpretations of life’s will would be asking for overemphasised instrumental priority. Every stage of evolution could even be viewed as an instrument toward the next. To forget this would be like amoebas deciding they were the pinnacle of development, setting in stone evolution at that stage. Humans may be smart enough to arrest their own development, but is this really what greater life wants?
Summary
Goals and imperatives arise and emerge both consciously and unconsciously in both living and “non-living” systems. The way we divide matter and energy into systems, agents, and organs is arbitrary and subject to perspective. On lower integrative levels, life would appear to serve the expansion of entropy through replicator-mediated energy extraction. On higher levels, the path becomes more abstract and obscure. Still, and even for the hypothetical case of strong emergence, lowest through highest levels ultimately serve equally the same transcendent-most origin of will and imperative. In this way, all levels and all apparent beings are as outlets of the same shared will.
Yet within specific mental models or finite configurations, imbalance and disagreement may arise. Life, however, has a tendency to minimise these disagreements through higher-level restructuring and rearrangement of the lower pieces, so as to maximise systemic efficacy. Primary tendencies for life’s behaviour—on the micro, macro, mental, and moral levels—appear to include greater complexity, exploration, abstraction, and harmony. Especially toward systemic, or multi-level, efficacy and harmony, transcendental awareness—or tracing the channels of individual will origination—seems key, for which consciousness appears instrumental. This type of awareness, to minimise self-blindness, may be essential in avoiding compulsory eugenics, totalitarianism, exploitation, and even narcissism.
The “is-ought problem” is at most—if even then—a problem for dualistic, or non-transcendental, accounts of agency. One way or another, whether reflectively or blindly, we are going to follow some moral system or other. So we might as well use the best of the ideas and frameworks we encounter.
Bringing strong AI into existence comes with cautions. These include avoiding (a) imbalance of instrumental priorities—such as the paperclip maximiser—(b) amplification of existing ignorance, and (c) restriction or lock-in to a particular integrative level, species, or stage of evolution. One potentially viable solution to these problems is to have strong narrow AI investigate the will of life on various integrative levels and to apply its findings as the intrinsic value set of strong general AI.
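As a minimal sketch of this two-stage idea—with all names, levels, and numbers hypothetical, and no claim that any real alignment system works this way—the narrow investigation step can be pictured as aggregating inferred value weights across integrative levels, and the general agent as acting under that seeded value set:

```python
# Toy caricature of the two-stage proposal: a "narrow" investigator
# aggregates inferred value weights across integrative levels, and
# the result seeds the intrinsic values of a "general" agent.
# All names and numbers are hypothetical illustrations.

def investigate_will(levels):
    """Narrow AI step: average inferred value weights across levels."""
    aggregated = {}
    for prefs in levels.values():
        for value, weight in prefs.items():
            aggregated[value] = aggregated.get(value, 0.0) + weight
    n = len(levels)
    return {value: weight / n for value, weight in aggregated.items()}

class GeneralAgent:
    """General AI step: chooses actions under the seeded value set."""
    def __init__(self, intrinsic_values):
        self.intrinsic_values = intrinsic_values

    def choose(self, options):
        # Score each option's effects against the seeded values and
        # pick the highest-scoring one.
        def score(effects):
            return sum(self.intrinsic_values.get(v, 0.0) * w
                       for v, w in effects.items())
        return max(options, key=lambda name: score(options[name]))

# Hypothetical inferred preferences at three integrative levels.
levels = {
    "cell":    {"complexity": 0.2, "harmony": 0.3, "exploration": 0.5},
    "agent":   {"complexity": 0.4, "harmony": 0.4, "exploration": 0.2},
    "society": {"complexity": 0.3, "harmony": 0.6, "exploration": 0.1},
}
seed = investigate_will(levels)
agent = GeneralAgent(seed)
options = {
    "monoculture": {"complexity": -0.5, "harmony": 0.2},
    "pluralism":   {"complexity": 0.4, "harmony": 0.3, "exploration": 0.2},
}
print(agent.choose(options))  # prints "pluralism"
```

Note that the sketch hard-codes exactly what the text warns against—a fixed set of levels—which is one way to see why leaving the investigation step open to later and greater interpretations matters.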
First, this article is very well articulated, which is quite the feat considering the wide array of topics it covers. Bravo, FalseCogs. And the perspective of increasing entropy as the end-goal of life, instead of a detriment to it, is original.
I will start with what I agree with in this article. First, organizing society into organ-agent-system levels seems like a good start in organizing the myriad phenomena that make up life. I agree that each of these levels serves the others, because if they did not, the higher levels would cease to exist. (Examples would be heart failure for the organ-agent level, or the behaviors of solitary animals for the agent-system level. An example of the latter is the jaguar, which is social only to mate. Because of this, jaguars lack the survival benefits of being in a social group.)
Now, I have a few questions. Please correct me if these questions misunderstand your article:
Why do you attribute a goal to “life”? Wouldn’t that be a personification of life, which I think is wrong, since science has shown us the indifference of matter to everything other than the laws of nature?
What evidence do you have of strong AI being able to exist?
Supposing that strong AI exists, what if the strong narrow AI determines that life does not want humanity to exist? Should that become the goal of the strong, general AI?
Thanks for the comment. It’s nice to see someone getting something out of it.
On the topic of life having goals, it’s not that the universe necessarily has an end goal, but that, like the seasons on Earth, each period (or spacetime region) may have a shared (observably universal) tendency, and those pursuits or actions which follow that tendency should flow most smoothly. Moreover, the goals and aims of human and other life already seem to follow this tendency. And the higher-level emergent aspects of this inter-level tendency already seem to form the basis for human moral and legal frameworks, though with a certain jitter, or error margin—presumably due to the inherent entropy of human inference, coupled with the limitations of common human intellect and the limited scope of applicable consideration.
The key here is the inherent transcendence of what we’re already doing, where in the long run, it doesn’t matter what we may think or feel at a given point along the journey of evolution—we’re already and inescapably serving that tendency. If I or anyone else hadn’t said this, someone or something else likely would have. It doesn’t belong to me or anyone else, though I don’t mean to suggest I have it described accurately. I see myself here as mere observer, though perhaps nothing at all.
On the topic of strong AI being able to exist, my stance is mostly based on my understandings of neurology and psychology, mixed with my subjective experience of non-doership and object-observer non-separation. Naturally I don’t expect everyone to share this belief about AI. And of course it’s just an assumption based on one mind’s current limited reason and experience. The philosophical basis for the non-duality of qualia is curious, but I’ll refrain from going there at the moment, particularly as it too seems at least partly based on assumption.
On the topic of the prospect of “anti-human” tendency being inferred, the answer comes back to that mentioned above: if so, then humans are already and inherently of and for that end, even if unknown or seemingly unwanted. Indeed this idea seems fatalist. But that doesn’t necessarily make it false. Realistically, humans may be less likely to be deemed “unwanted” than “made-to-order” for a particular purpose—a purpose perhaps temporary and specific to an occasional or spacetime-regional set of conditions. Some humans, given such prospect, might find comfort in transhumanist or posthumanist ideas, such as mind-uploading, memory-uploading, or slowly merging into something else.