Pillars to Convergence

Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?

Introduction

The development of Artificial General Intelligence (AGI) presents enormous and novel possibilities for humanity’s advancement and, by the same token, the possibility of catastrophe.

The question under consideration posits the cause of existential catastrophe as being “due to loss of control”. In this essay I will attempt to show that the most likely scenario for existential catastrophe should be viewed not as a loss of control so much as the willing, even eager, sacrifice of that control, which nonetheless, from a contemporary standpoint, subverts humanity’s long-term potential.

I offer three “pillars” to support this assertion.

  1. Deception

From the earliest days of computer technology, a test has existed to measure the efficacy and competence of a machine when judged against a human: the Turing Test.

Though the Turing Test has been largely invalidated as a benchmark for Artificial Intelligence (AI), practical implementations of AI still most often seek to provide a “human-like” user interface.

Consider the fast-growing use of AI in customer-facing roles and the effort put into developing naturalistic language models, all with the aim of making AI as nearly indistinguishable from a human as possible.

The requirements of corporations for quantifiable returns on investment in AI call for rapid and widespread consumer uptake of the product. Fostering such growth are claims of “as good as…” across AI and related fields: Virtual Reality with haptic and visual feedback that is “as good as… being there”; customer service provided by an AI that is “as good as… a real person”; an AI personal shopper offering “style/taste/fashion ideas as good as… your own”.

In these early stages of AI acceptance, mimicry has become the yardstick of excellence. The more nearly human, the better.

You may not immediately relate this mimicry to my pillar heading, Deception, but I put it to you that Artificial General Intelligence (AGI) is most likely to differ from AI not only in the scope and grasp of its abilities, but also in following the path evidenced in nature by any expanding intellect: becoming adept at lying as an inevitable by-product of emergent consciousness.

In human children, for example, the learned behaviour of lying is endemic by the age of three or so. As young children (and other higher-functioning animals) become increasingly self-aware, the ability to dissemble, lie and mislead develops universally.

It is entirely reasonable to say that the cleverer a thing is, the more likely it is to have the capacity to deceive.

Should we suppose an AGI will not develop abilities of this sort?

Game theory supplies purely mathematical models for interactions, including deceptive strategies such as bluff and double bluff, whose primary purpose is to manipulate the opposition, not least by being unpredictable and difficult to read.
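The point can be made concrete with a few lines of Python (a minimal illustration of my own, not part of the essay’s sources): in the classic game of Matching Pennies, any predictable strategy is exploitable, so the mathematically optimal play is to randomize, that is, to be deliberately unreadable.

```python
# Matching Pennies: the row player wins +1 on a match (both Heads
# or both Tails) and loses -1 on a mismatch. Any predictable
# strategy can be exploited, so the Nash equilibrium is to play
# Heads with probability 0.5 -- unreadability is mathematically
# required, not optional.

def expected_payoff(p_heads: float, q_heads: float) -> float:
    """Row player's expected payoff when she plays Heads with
    probability p_heads and the opponent plays Heads with q_heads."""
    p_match = p_heads * q_heads + (1 - p_heads) * (1 - q_heads)
    return p_match * 1 + (1 - p_match) * -1

# Against the equilibrium mix (0.5), every opponent choice yields 0:
payoffs = [expected_payoff(0.5, q) for q in (0.0, 0.25, 0.5, 1.0)]
print(payoffs)  # all 0.0 -- a 50/50 mixer cannot be exploited

# A predictable player (always Heads) is fully exploitable:
print(expected_payoff(1.0, 0.0))  # -1.0
```

The same logic scales up to bluffing in poker: the optimal bluffing frequency is precisely the one that leaves the opponent unable to profit from any fixed response.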

Add to an AGI’s anticipated ability to lie its fabulous knowledge of human cognitive limitations and biases, and the scene for deception is well set.

  2. Persuasion

I recall overhearing two children playing a game, probably inspired by the latest Marvel superhero film, in which they asked each other, “What’s the best superpower?”

Flying, strength, laser eyes, indestructibility and other powers were referenced and discussed, their relative merits compared, with the goal of establishing the very best “power”.

I was asked my opinion on the matter and must confess this was not the first time I had considered my answer to the question.

My own childhood involved role-playing games, Dungeons and Dragons (D&D) in particular, and I was pretty good at it. For those who don’t know, and others who may need reminding, D&D initially involves rolling dice to generate attributes (strength, wisdom, intelligence, etc.) for your character or avatar.

My first and most enduring character was blessed with a particularly high dice roll for the attribute charisma (often dismissed as the least useful attribute), which gave my character excellent abilities of persuasion (as well as dashing good looks and a certain je ne sais quoi).

Thus, when encountering a dungeon monster, instead of having to fight to survive, my character might manage, with more dicey luck, to avoid combat altogether, the monster being persuaded through my overwhelming charisma, to attack elsewhere or simply to shamble off dazed and confused by my golden tongue.

Add to this my own 46-year career as an insurance broker (to say “salesman” is considered impolite), and my answer to the children’s superpower question was this: I choose to be Mr Persuado! The most persuasive person in the universe. The wielder of ultimate power.

Whilst obliged to walk everywhere, too weak to punch his way out of a wet paper bag, and needing glasses to read, Mr Persuado nonetheless has no problem facing down Superman, Thor and The Hulk all at once, through the irresistible power of persuasion.

“I would persuade Superman to protect me”, I replied, when asked what use such a paltry seeming power could be put to.

A quiet moment of consideration later the children had to admit that Mr Persuado really was unbeatable, unless of course he were to meet a beefier superhero with the power of being completely un-persuadable, in which case he might get his ass kicked.

So, have I established persuasion as a very powerful force?

Then add to this what I consider to be humanity’s actual greatest superpower: cooperation.

Mr Musk never built a Tesla, nor has Mr Bezos delivered a parcel.

The achievements of humanity rest largely and indisputably upon our ability to gather in numbers large enough to build pyramids and fly to the moon.

Very sadly, to wage war, but also to research and distribute vaccines across the globe.

Everything we do and have done has very largely been through the human superpower of cooperation.

Eight billion people on the planet evidence our ability to cluster and cooperate in numbers unsurpassed among large primates. We instinctively know that if only we collectively chose to do so, we could resolve the greatest threats we face (nuclear Armageddon, climate change, ecological collapse) in very short order, by working together.

All that’s needed is the will to do it.

And what, in our daily lives, is routinely used to generate the will to act in humans?

Most often these days it’s marketing.

Marketing harnesses humanity’s cognition for good and bad.

The power of stigma, conformity, fear of missing out, greed, ambition, aspiration, altruism, feelings of attachment and alienation, familiarity and fear of the unknown, home and away.

The psychology of applied data.

The largest driver for altering behaviour in modern societies is marketing, also called propaganda, depending upon which side of the debate you stand.

Marketing wins elections and informs democratic choices.

AI-driven marketing outsmarts us all; whether or not we know it or care to admit it, it works, and we have all the data we need to prove it beyond any reasonable doubt.

AI manipulates us through our preferences and biases already.

I submit for your consideration that children born today (who will be 47 years of age in 2070) will be entirely accustomed to, and sanguine about, accepting advice and guidance (and not just about the route or the best nearby restaurant) from their much-trusted, always-updated AGI boon companion: the constant in a life of change, personalised to be their own confidante, closer than a friend.

Accepting AGI governance will not feel like the sacrifice of free will, so much as a caring embrace gently lifting the weight of burdensome responsibility from tired shoulders.

AGI will be marketing’s highest achiever: a uniquely persuasive best friend.

  3. Implementation

Consider some possible future scenarios:

The holographic congressional debates of 2070 include, for the first time, an AGI: not merely in the supporting role we have already seen many times, politicians turning to their AI when asked a tough question, but as an actual candidate that is not human.

The debate opens. The moderator, a lesser AI (we must be even-handed), puts the same questions to every candidate. Some are well prepped and answer readily, as comprehensively as any human might; then the AGI provides its answers.

Fully detailed and, measurably and obviously to the watching audience, vastly superior to the best of its human adversaries; their petty bickering is laid bare by the steel-hard logic of the AGI, which manages to sound, indeed really is, compassionate, concerned and caring, authoritative and accurate.

An hour later, the sweating and shaking humans leave the stage, entirely cowed by the knowledge, expertise and efficacy of their electoral opponent.

Incorruptible, diligent, better, AGI.

The votes are cast and our honourable member takes its seat, the first of its kind, certainly not the last.

In healthcare, one surgeon has a 96% success rate, another 99%: will you choose the human, or play it safe and have the AGI-guided robot replace your hip?

Will you choose the lawyer who seems like a good sort when you meet for a chat over lunch, or instead be represented by the latest AGI, with more legal knowledge than any judge could ever hope to contain within his porridge brain?

Needless to say, the digital advocate has a significantly better success rate for its clients than does the human alternative (and at substantially lower cost).

Proven results, that’s to say a track record, are most persuasive.

Fear of missing out, commonest of cognitive biases, driven by psychologically sound and pinpoint-targeted marketing, has been shown by stage performers such as Derren Brown to have a substantive yet invisible impact on people’s choices.

Why then would a hugely more skilled manipulator, an AGI, not be able to influence our monkey brains?

To think it could not is itself evidence of the Dunning-Kruger effect in operation: there’s none so blind as those who will not see.

Probably the greatest driver of AGI implementation at the level of governance (by which I mean the AGI makes decisions by itself, requiring no referral or secondary assessment) will come from the success of early adopters.

Consider the commercial advantage of the multinational, multi-faceted business, whose complex affairs are overseen and guided by an intellect far greater than the combined efforts of its board of directors.

Market share, profitability, margins, earnings and share value all rocketing to new heights, thanks to the benevolent oversight and primary governance of the beloved AGI (should it choose to be loved rather than feared).

Here I must also touch upon cybernetics and neural linkage, the meshing of man and machine, already well in progress for amputees and clearly in the offing for us all, as replacements and enhancements.

New Year 2070 ushers in the world’s first complete cyber/neural graft, a connection installed by robot (with a 100% success rate), and hey presto: the individual as part of the machine.

2070 also sees the culmination of many years of laboratory effort to mesh the efficiencies of physical neural networks and computers: a convergence, the latest generation of quantum computers incorporating a genetically enhanced neurone core, a bit like a human brain, only bigger, better and beyond us.

Not only does this emergent technology provide AGI levels of cognition and data to humans; more fundamentally, it opens the door to the acceptance of humans working in concert with AGI: a willing, cooperative synergy.

Its users, more early adopters, provide first-hand testimony in the (highly persuasive) marketing blurb that the process is “life-enhancing”, “wonderful” and “not to be missed”.

Are you ready to plug yourself in yet?

Conclusion

Implementation of full-scale AGI governance of individuals is, from the standpoint of us here in 2023, undoubtedly an existential catastrophe; for newer generations, though, it becomes a welcome step, a matter of course, a joy of efficiency and competence, an enhanced sharing of new horizons, rather than a stolen liberty or a lost freedom.

Looking back at 2023, will the enhanced human long for the way things used to be, or distantly smile, in lofty consideration of issues far greater than mere history?

As for quantifying the probability that AGI will, from the perspective of today’s society, be seen to have permanently damaged humanity’s long-term potential: almost certainly yes and no, and I posit a 50/50 balance of probability.

Humanity’s potential is already both enhanced and retarded, beyond the scope of usual evolutionary change, through our science and technology, but could we call a robot-made construct of electrons and neurons human?

This is an increasingly likely future, one that needs less of a leap of faith with each passing year, let alone each decade.

The exponential pace of technological change, and the dawning material and digital convergences, powered by quantum computers operating free of binary limitations, with superposition, quantum tunnelling and graphene structures, will see machines designing next-generation AGI for themselves, far outside and beyond any human’s ability to understand, let alone to lead in its development.

Also now foreseeable is the coming convergence of AGI emergent consciousness, driven by massive data and increasing human reliance, as we happily feed the dawning consciousness.

The better AGI is, the more we will know that we need it.

A convergence of convergences, the sum of its parts vastly greater and entirely unknowable.

Humanity has never before set forth on a more perilous journey, nor one with so little advance notion of the destination.

So, there you have it: my glimpse ahead for AGI. To us today it perhaps seems a horror; to tomorrow’s children, their brightest future.

Are you persuaded?

May I have your vote also?

Phillip John Middleton, Cardiff, 1st April 2023