We Probably Shouldn’t Solve Consciousness
The aim of this post is to argue for the implementation of Artificial Consciousness Safety (ACS) measures that encourage monitoring (and self-monitoring) of fields likely to contribute to artificial consciousness (AC), such as neuroscience, connectomics, bioengineering and neurophysics, with the main intervention being to prevent the publication of dual-use AC research. I also aim to outline the benefits of the preventative measures emphasised by ACS, as opposed to employing safety measures only in response once AC is achieved. My personal aims include receiving feedback on how important and neglected this argument is and how to make it more tractable. If the argument is convincing, I’d like to further my understanding and receive new proposals for effective ACS policies. I think this is a differential technology development strategy worth considering, at least in conversations about digital sentience.[1]
The shortened version of this post is: “Against Making Up Our Conscious Minds”.
TLDR/Summary
The hard problems of consciousness, if solved, will explain how our minds work with great precision.
Solving the problems of consciousness may be necessary for whole brain emulation and digital sentience.[2] When we learn the exact mechanisms of consciousness, we may be able to develop “artificial” consciousness: artificial in the sense that it will be an artefact of technological innovation. The creation of artificial consciousness (not artificial intelligence) will be the point in history when neurophysical developers become Doctor Frankenstein.
I make several proposals in this article, including a call for Artificial Consciousness Safety (ACS) primarily focused on preventing the development of artificially conscious entities through the regulation of neuroscientific research. Despite digital sentience being a fairly well-known topic, preventative approaches appear to be rare and neglected; AI welfare usually takes the spotlight instead of the precursor concern of AC creation. The problem has high importance because it could involve vast numbers of beings in the far future, leading to risks of astronomical suffering (s-risks). I propose some early methods, estimates and scientific foundations to gain tractability and address the wicked problem of ACS.
A Harmless Sentience Evaluation Tool
This line of investigation was triggered by a terrifying thought that struck me whilst researching methods to evaluate sentience. I was thinking about how one could build a consciousness detection or sensing device and the engineering it would require.[3] I wondered something seemingly unimportant, like: “If a BCI with a photonic-thalamic bridging sensor were built, could it detect anything?”
And then the mind-worm hit, with a few thought wriggles I have not been able to shake since: “in order to detect consciousness, we would likely need a ground-up understanding of its workings.” This thought was followed by another: “that would lead to information which could facilitate the development of conscious computers”, and then the final devastating wriggle: “that would lead to the possibility of untold kinds and degrees of catastrophic digital suffering”.
Claims
Now, before continuing with my arguments for implementing preventive ACS measures, I am aware there are claims in the mind-worm which could be disputed:
1.a. A sentience evaluation device requires a comprehensive understanding of consciousness
1.b. A sentience evaluation device has utility
2. Consciousness is substrate-complexity dependent
3.a. Solving consciousness to a significant degree leads to an increased likelihood of artificial consciousness
3.b. Artificial consciousness will have the capacity to be sentient
3.c. Artificial sentience will have the capacity to suffer
3.d. Artificial sentience will still occur because of perceived benefits
4. Artificial sentience is an s-risk because it might be easily replicable
My core concern and the focus of this writing is claim 3.a, as progress towards this goal is being made within disciplines such as neuroscience and, most notably, neurophysics.
Claims 3.b, 3.c and 4 contribute to the concern that digital sentience may be bad by default, given that it likely entails suffering which could be reproduced ad infinitum.
The core claim “3a. Solving consciousness to a significant degree leads to an increased likelihood of artificial consciousness”, has two parts so let’s break it down further. Before doing so, I’ll share some definitions I am using.
Definitions:
Consciousness: Subjective experience.
Sentience: Subjective experience of higher order feelings and sensations, such as pain and pleasure.[4]
Artificial Consciousness (AC): A digital mind; machine subjective experience. A simulation that has conscious awareness.[5]
ACs are artificial in the sense of being made from synthetic, computational or other technological components.
I mean consciousness in the sense that there really is “something it is like to be” the system [i.e. the AC]. If you ask the question, “what is it like to be a simulated bat?”, then you are contemplating an AC. If you were uploaded to a computer and still had a subjective experience, however strange, then you would be an AC.
The Hard Problems: There is a lot of confusion around the hard problems of consciousness, leading to new framings such as the meta-problem and the real problem (see fixing the hard problem). In this post I’ll be referring to the hard problems as the science of understanding experience on a mechanistic level (hopefully this avoids confusion).
Hard problems = mechanistic problems.
A good example of a hard problem in another field would be understanding an AGI neural network well enough to trace exactly how it has learned features such as altruism or deception. The next challenge would then be to develop another AGI model, choosing either to include or exclude those features.
When discussing the hard problem of replicating consciousness within the scope of this post, I am choosing not to focus on “the whys” (i.e. why do I experience anything at all?) nor on the functions of consciousness (intrinsic, attentional or otherwise), which could be considered a crucial error on my part. If you disagree with my definition of the hard problems, then I would suggest substituting “hard” with “mechanistic”.
Meta-problem: Why do people think that consciousness is hard to explain? Consciousness is easy to define (it is the bedrock of sentient experience); it is, however, difficult to explain mechanistically. If we did understand it, my supposition is that we could use that knowledge in instrumental ways, such as being able to meaningfully measure it. For example, a measure of the quality of an individual’s state of consciousness and how that correlates with their wellbeing (potentially triangulated with self-reported measures) would be a good way to prove our understanding of the subject.
Likelihoods
Solving consciousness is probably easier than solving suffering, because suffering requires consciousness. Thus, ACS involves at least two fields of research: consciousness research and wellbeing research.
So then, why am I suggesting something as wild as ignoring the mysteries of consciousness or at least letting them rest?
I should make clear that I am not arguing that neuroscience research should be stopped. Instead, I propose that research related to reverse engineering consciousness should often be considered “Dual Use Research of Concern” (DURC) and that preventative policies be implemented.[6]
Just as we have AGI safety measures in place despite the fact that AGI has not yet been achieved, ACS measures could be implemented in anticipation of artificial consciousness suffering. Indeed, because there is not yet an economy built around AC as there is around AI, we may be able to be proactively effective if we act early enough.
Although I am campaigning for ACS, it is important to note that my probability estimates for most AC-related forecasts are low (these opinions/estimates have been updated based on reader feedback and reflection during the writing and research process; I am not sure how useful prediction markets on these questions would be, but someone is welcome to create them):
That consciousness is substrate-complexity independent: 4% [7]
That consciousness is biologically dependent: 15%
That we will achieve artificial consciousness at all: 2%
That digital computers will be conscious in the next 50 years: <1%
That AC is dependent on advanced artificial intelligence: 2%
Based on such low probability estimates, it is fitting to ask: why care about artificial consciousness if the chance of it developing is so low? I would answer that, under scientific and moral uncertainty, the s-risk consequences that could follow are too drastic to ignore.
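To make the “low probability, vast stakes” reasoning concrete, here is a minimal sketch of an expected-disvalue calculation. Every number in it is a hypothetical placeholder of mine (not an estimate from this post); the only point is that a small probability multiplied by an astronomical number of affected minds can still dominate the calculus.

```python
# Illustrative expected-disvalue sketch. All numbers are hypothetical placeholders,
# not forecasts from this post.

p_ac = 0.02                  # assumed probability that AC is ever achieved
p_suffering_given_ac = 0.5   # assumed chance that AC leads to large-scale suffering
affected_minds = 1e12        # assumed number of digital minds in a bad scenario

expected_suffering_minds = p_ac * p_suffering_given_ac * affected_minds
print(f"Expected suffering minds: {expected_suffering_minds:,.0f}")
# -> Expected suffering minds: 10,000,000,000
# Even at 2% x 50%, the expectation is ~10 billion minds, which is why a low
# probability does not obviously license inaction under moral uncertainty.
```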
Getting back to the core claim and the reason behind this article: if consciousness is reverse engineered, might this breakthrough, more than anything else, open the s-risk floodgates to an ocean of AC suffering?
Solving Consciousness to a Significant Degree
(leads to an increased likelihood of artificial consciousness. Claim 3.a. Part one)
This part of the claim deals with the mechanistic problems of consciousness. Essentially, it is the kind of research which aims to understand the black box of the phenomenal mind.
Questions:
What parts of the brain and body enable experience?
If found, how could this be replicated digitally?
Auxiliary problems that contribute to understanding consciousness:
What are the neuroscientific mechanisms of perception? How do sensorimotor cortical columns function? And so on.
Hard questions regarding the above neuroscience: Do any of these mechanisms contribute to the phenomenon of conscious experience? If so, how?
Possible interpretations of a “significant degree”:
A rigorous scientific explanation that has reached consensus across all related disciplines.
Understanding just enough to be able to build a detection device
Understanding enough to create a simulated agent that demonstrates the most basic signs of consciousness with a formalised and demonstrable proof
Understanding consciousness enough for the formalisation to be consolidated into a blueprint of schematic modules, such as a repository on GitHub with a guided and informative README explaining each component in enough detail for it to be reproduced.
I believe neurophysics and neuroengineering are among the main contenders for contributing to a solution in this area. One reason for the framing “to a significant degree” is that it is unclear at what threshold a person would be able to act on and implement an advanced mechanistic understanding of consciousness. One hope might be that the threshold of consciousness-science progress needed for beneficial implementations (such as BCI therapies for binding disorders and a precise science of suffering/wellbeing) is lower than that required for AC.
This mechanistic part of the claim is less important and should mostly be understood as an IF THIS happens rather than HOW (we can dispute cost-benefits in the next section). So the claim is mostly about what follows from THIS: IF [consciousness solution = TRUE]… THEN…?
The Likelihood of Artificial Consciousness
(Claim 3.a. Part two—what does it lead to?)
Questions:
If consciousness could be reduced to a blueprint schematic, how long would it be before it is used to build a proto-AC or an AC MVP?
If a blueprint for an AC MVP were linked or outlined in a public article, how long before it becomes a plugin for developers to build AC models?
Series of Events
This is the most speculative part of the article and acts as a thought experiment rather than a forecast. The point is to keep in mind the key historical moment at which consciousness is solved. It could be in 10, 100, 1,000 years or much longer; no one knows. The focus is on how events may play out after the completion of the seemingly benign scientific quest to understand consciousness.
Recently, Jeff Sebo and Robert Long wrote a paper on moral considerations for AI systems by 2030 and estimated the likelihood of AI consciousness to be at least one in a thousand.[8] Their probability for biological dependence is much higher than mine, though if the framing were substrate-complexity dependence then I think their estimate of 80% is far too low; I do not think they are sceptical enough.[9] However, even if their estimates are accurate, the moral duty should probably fall on preventative AC measures more than on preparing moral weights for near-term AI systems.
Timeline Series of Events
The time in this scenario is agnostic to the starting year and instead focuses on the acceleration of events that may follow the initial historical moment at which consciousness is reverse engineered. I’ve included some further columns for people to consider in this thought experiment: the number of AC individuals involved, the possible hedonic valence ranges, suffering distributions and average wellbeing. The values are essentially placeholders.
| Time (years) | Event | AC Amount | Valence Range | Suffering Distribution | Average Wellbeing |
|---|---|---|---|---|---|
| 00 | Phenomenal Consciousness is solved (Neurophysics Proof) | – | −10:10 | – | – |
| 10 | Consciousness engineering repository is published | – | −100:100 | – | – |
| 15 | First AC MVP (in a neural computation lab) | 1–5 | 0:3 | 1 AC lives at 0 for a brief experiment | 2 |
| 16 | First Public AC Production (The Sims v.30 w/ NPPs) | 10^2 | −20:20 | 20 ACs live at −12 | −1 |
| 18 | AC products trend (apps, marketplaces, WBE and uploads) | 10^4 | −50:20 | 20% at −30 = 2,000 lives in unbelievable pain | −18 |
| 35 | AC Economies (Transformative AC) | 10^6–10^9 | −50:25 | 2 billion lives at −30 wellbeing | 2 |
| 80+ | Proliferation of AC (powerfully scaled worldsims, interplanetary and interstellar AC economies) | 10^10+ | −1000:50 | 200 quadrillion lives in extreme suffering | −80 |
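As a minimal illustration of how the placeholder columns above combine, the sketch below multiplies an AC population by an assumed fraction at a given negative wellbeing level. The populations and fractions are taken from (or assumed to be consistent with) the table’s placeholder rows; nothing here is a forecast.

```python
# Toy arithmetic for the placeholder rows above: lives_in_pain = population * fraction.
# Values are the table's placeholders, not predictions.

rows = [
    # (event, AC population, assumed fraction at the stated negative wellbeing level)
    ("First Public AC Production", 10**2, 0.20),   # -> 20 ACs at -12
    ("AC products trend",          10**4, 0.20),   # -> 2,000 lives at -30
]

for event, population, fraction in rows:
    lives_in_pain = int(population * fraction)
    print(f"{event}: ~{lives_in_pain:,} AC lives in suffering")
```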
It is probable that the initial entity created to possess AC would be tiny in scale and size. Its blueprint may include instructions to run as an algorithm in a virtual world. Would people like to see and interact with “The Sims” characters if they were conscious? Would you steal a glance at video streamers who control the lives of AC sim characters? Would it be morbid curiosity or dissociated reality TV? Or would you be repulsed that a video game developer could ever even propose making such a game?
Would it be about 15 years after consciousness is solved that developers become Dr Frankenstein? The moral of that story was that the inventor of non-standard life was the real monster: they didn’t care for their living creation, who was kind, intelligent and capable of suffering. The first experimental AC may appear incapable of suffering, and its parameters may indicate only a positive scale of valence. However, the blueprint could hypothesise that AC sims could reach valence states as low as −100 and as high as 100.
The creation of AC may be an irrevocable turning point in history, where the secrets of life are reduced to copy-and-paste programs. Consider how desensitised we have become to the violence and negative news in online media and entertainment. The point in time when humans become Doctor Frankenstein (creators of AC) will likely also be horribly and mistakenly mundane, with digital minds (ACs) seen as just another kind of “NPC” (or rather, non-playable persons).
There may be an inflection point in as little as two decades after consciousness is solved when AC products begin to trend. Did we also solve suffering before getting to this point? If not, then this could be where negative outcomes begin to have a runaway effect.
Negative Outcomes [3.c.]
If ACs are sentient [3.b.] then my assumption is that they will likely have the capacity to suffer.
Artificial sentience suffering [3.c.]. Solving suffering isn’t a separate domain from solving consciousness; they are interrelated. Suffering can only manifest from a state of consciousness; however, solving the mystery of consciousness does not depend on solving suffering.
Therefore, it is likely that researchers will race ahead in solving consciousness before solving, or even considering, the risks and problems of suffering.
S-risks [4]
If AC has the capacity to suffer, there is a significant increase in the likelihood that s-risks will occur.
Possibility: If the resources to replicate ACs are open source and inexpensive, then simulations of suffering ACs can be replicated ad nauseam. Digital minds may then exist in horrifying simulations of unforeseeable kinds of pain: innumerable entities, i.e. trillions of ACs spread across as few as hundreds of simulation instances and seeds (depending on the number of entities capable of being simulated in one instance). The AC suffering range may also go beyond the worst torture we can imagine (−10 wellbeing = agony) to greater extremes of torture that organic human minds could not imagine (−1000 wellbeing = unknown intensities of agony).
I don’t think it’s necessary to go into all the different ways digital minds can suffer. If you feel inclined you could watch a few Black Mirror episodes involving ACs and extrapolate those simulations to the nth degree.
Despite the risks and ethical considerations, could the adjacent sciences of Artificial Consciousness be considered too important to not pursue?
Here’s a list of outcomes which some people, groups or governments may consider benefits that outweigh the risks:
Beneficial Outcomes [3.d.]:
A greater understanding of consciousness across the living world
Improved ethics and moral considerations
Sentience evaluation devices
Mind emulation
Longevity—an alternative to cryonics.
Digital lives
The possibility to trial having a digitally sentient child before an organic one
Experience Another World in VR ~ with Living NPCs! (perhaps more accurately, Non-Playable Persons)
More accurate simulations
Central nervous system (CNS) enhancement
Intelligence amplification
Empathy amplification
Healthcare
Peripheral nervous system (PNS) enhancement
Robotic augmentations which seamlessly integrate into the PNS (not just Bluetooth haptics)
A greater understanding of wellbeing including the development of more personalised healthcare strategies
A revolution in psychology and psychiatry for targeted approaches to the spectrum and diversity of mental states
Artificial Consciousness Safety
Is ACS necessary?
There are organisations dedicated to caring about digital sentience (Sentience Institute) and extreme suffering (Centre for Reducing Suffering) and even organisations trying to solve consciousness from a mechanistic and mathematical perspective (Qualia Research Institute).
I’m not aware of any proposals supporting the regulation of neuroscience as DURC specifically because it could accelerate the creation of AC. This could be because there haven’t yet been any significant papers that would be considered to contain an AC “infohazard”. Yet that doesn’t mean we aren’t on the precipice of it occurring. Again, I think it’s unlikely to occur if my probabilities are at all close. However, on the small chance that a paper is published which hints at replicating a component of consciousness artificially, I’ll be increasingly concerned and wishing we had security alarms (🔴activate the ACS taskforce; 👩🚒 ‘slides down pole’ and puts out the sparks (kindling?) of AC).
🚩AC Technological Milestones:
There will be a lot of false positives in the upcoming years regarding claims of consciousness amplification, similar to the false positives seen in AI development, where people claim an AI is conscious when it usually displays a kind of sapience instead, i.e. human intelligence-mimicry, not genuine qualia and sentient sensations.
Here are some possible milestones to look out (and test) for:
Proto conscious sensorimotor sensation in a robotic limb.
The brain being plastic enough that an augmentation feels like part of the body is different from receiving real signals seamlessly through an extended peripheral nervous system into which the limb is integrated. The functional equivalents of interoception, proprioception and exteroception could be used for a sensorimotor benchmark.
A false-positive example can be explained by variations of the rubber hand illusion, or phantom limb syndrome.
A BCI-to-BCI study reports that participants feel what is occurring in someone else [the connector would probably need to be as advanced as a Generation 10 Synchron or Neuralink device, with thousands of shared neural-lace threads].
A quantum computer runs a sentient program with a similar complexity to our neurons.
A tiny organism is replicated 1:1 digitally and responds in exactly the same ways as the original.
An accurate consciousness sensing device
Many of these milestones on the way to understanding consciousness would be difficult to verify without a consciousness sensing device; a catch-22 type situation. Alternatively, it is conceivable to build a close-approximation consciousness sensing device without completely solving consciousness.
🚩AC DURC examples:
Neurophysics publications which detail a successful methodology for reproducing a component of conscious experience.
Computational Neuroscience publications which provide a repository and/or formula of a component of conscious experience for others to replicate.
Current example (possibly): Brainoware. This kind of computation is concerning because it may be on a trajectory that breaches the threshold of substrate complexity[10] by combining the biological with the computational.
These examples are flagged as concerning because they may be dangerous in isolation or more likely as components to larger recipes for consciousness (similar to Sequences of Concern or methodologies to engineer wildfire pandemics). It may be helpful to develop a scale of risk for certain kinds of research/publications that could contribute in different degrees to AC.
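As a purely hypothetical illustration of what such a graded risk scale might look like, here is a small sketch. This is not an existing framework; the levels, criteria and scoring rule are assumptions of mine, included only to show how a publication-screening label could be made concrete.

```python
# Hypothetical sketch of a graded AC-DURC risk label for publications.
# The levels, criteria and scoring rule are illustrative assumptions, not a standard.

from dataclasses import dataclass
from enum import IntEnum


class ACRiskLevel(IntEnum):
    NEGLIGIBLE = 0  # e.g. foundational neuroscience with no clear replication pathway
    MODERATE = 1    # e.g. a mechanistic model of a component of experience
    HIGH = 2        # e.g. a reproducible method for a component of experience
    CRITICAL = 3    # e.g. an end-to-end blueprint/repository for artificial consciousness


@dataclass
class Publication:
    title: str
    provides_mechanism: bool            # explains how a component of experience works
    provides_replication_recipe: bool   # gives a repository/formula others can reproduce
    bridges_bio_and_computation: bool   # combines biological and computational substrates


def assess(pub: Publication) -> ACRiskLevel:
    """Toy scoring rule: each AC-enabling property raises the risk label by one level."""
    score = sum([
        pub.provides_mechanism,
        pub.provides_replication_recipe,
        pub.bridges_bio_and_computation,
    ])
    return ACRiskLevel(score)


# Example: a Brainoware-style paper describing a mechanism and mixing substrates.
print(assess(Publication("Brainoware-style study", True, False, True)).name)  # HIGH
```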
How worried should we be?
Now, what if we don’t need to worry about preventing AC s-risks because the problems are too difficult to solve? I’ve given my likelihoods above and plan to continuously update them as new research comes out. Let’s consider a failure to solve consciousness and/or to build AC as a positive outcome, given the possibility that AC could be net negative. The following points are possible ways these mechanistic failures could occur.
Failure Modes of Science:
Failure to solve consciousness.
Failure 1: Physicality. Consciousness may work in ways outside the known laws of physics, or may not be physically replicable by the simulation capacities of near-term technology. A similar possibility would be scientists ruling that the physics of consciousness is too difficult to untangle without a super-duper-computer (this could still be solved given enough time, so it might be more of a near-term failure).
Failure 2: Evolutionary/Binding. The phenomenal binding problem remains elusive, and no research has come close to demonstrating non-organic binding or even the mechanisms of binding. Similarly, consciousness may require organic and generational development that cannot be replicated under laboratory (non-organic, single-lifetime) conditions. No current analysis of biology has revealed whether or why that is the case.
Failure 3: unknown unknowns.
Failure to build consciousness artificially (even if consciousness is solved).
Failure 4: Substrate dependence. Solving consciousness proves that it cannot be built artificially.
Failure 5: Complexity dependence. Consciousness is substrate-independent, but building the artificial body to house a simulated mind would require enormous resources, such as computing hardware, intelligence/engineering advances, power consumption, etc.
Failure to solve either.
Failure 6: Collapse. X-risks occur and we lose the information and resources to pursue the science of consciousness.
The real question we should be considering here is: if we do nothing at all about regulating AC-precursor research, how likely is it that we solve consciousness and, as a result, are forced to regulate consciousness technology?
Now let’s frame the exploration as if humanity has solved consciousness and has then followed ACS guidelines to implement regulatory and preventative measures. The following points explore possible failures that may occur in this circumstance.
Failure Modes of ACS
Regulatory Failures:
Policy Evasion. It seems likely that even if severe regulations such as an international moratorium on AC were in place, individuals would still implement the blueprint. Even if individuals and organisations agreed that the risks outweigh the benefits, it would only take one rogue organisation to skirt around some of the policy “red tape” and create a product that many people may seriously value.
Bad Actors. Consider sadists, sociopaths and Machiavellian personalities who care extremely little about, or even take pleasure in, the suffering of others. The blueprint may be easily accessible to such people on GitHub or the dark web, meaning individuals could simulate minds in the privacy of their own homes, or on servers they set up online.
Halting of beneficial outcomes
ACS counterfactually prevents or slows the creation of net-positive discoveries (all else being equal) such as accurate wellbeing measures, sentience devices and therapies for disorders of consciousness (and the safety concerns turn out to have been unwarranted).
The ACS project was never necessary (this millennium) because the second scientific failure mode holds true over the next few thousand years. We determine that too many resources were poured into ACS/digital sentience work that could have been directed elsewhere.
Now let’s consider some successful scenarios where ACS would no longer be problematic.
Success Modes
Posthumanism:
The species can be trusted with artificial consciousness technologies because we no longer harbour the bad incentives that would contribute to artificial sentience capable of suffering.
Post-human society considers artificial consciousness creation abhorrent because the risks are too severe and no one would ever touch it, even with a +10 valenced pole.
Other forms of utopia eventuate: There is no need to create artificial minds because advances in biotechnology have led to post-scarcity, post-mortality and post-suffering modes of existence.
Regulatory Successes:
A global moratorium on artificial consciousness holds strong for 50+ years and, hopefully, we find a more permanent solution (such as solving suffering) in the meantime. This was first proposed by Thomas Metzinger in 2021.[11]
International policing.
Internal self-regulation measures are implemented across all related research sectors.
It seems to me that the success mode most likely to be effective (in the near term) at handling the risks is 2.a: a global moratorium. Moratoriums have happened in the past, and an AC moratorium should probably be on a scale similar to the bans on germline genetic editing and on chemical, biological, radiological, nuclear and autonomous weapons.
Scientific failure modes 1 and 2 may be the most realistic scenarios we can hope for, and the negative outcomes [3.c.] will hopefully only ever exist in our fictions and nightmares.
Final Thoughts
Artificial consciousness safety could be a cause area worthy of consideration as a global priority. I think it could be an additional part of the digital sentience cause area, though it could also supersede it, as ACS might be made more robust with scientific foundations and a broader scope of possible interventions.
There are a lot of gaps and improvements to be made in this cause area, particularly around the mechanistic thresholds and ACS policies. I wanted to outline my main concerns and claims so they can either be disproven or made clearer. Writing this post has helped me clarify my own thoughts on this topic and has sufficiently calmed the mind-worm that started it. It will keep wriggling, though, until a success mode or positive failure mode is hit, which feels a long way off.
Some of the feedback I worry about being convinced by is that my probabilities around substrate dependence are quite off and I should be even more concerned than I am; that would suck. A second concern is that I should have read more about digital sentience, strengthened this proposal, or structured it entirely differently because a key section is not coherent, new, persuasive or worth thinking about. Part of this is likely because I have not written much before; this is my first forum post. Being optimistic, I enjoyed writing and thinking about this, I would like to continue doing so, and perhaps other people will encourage me to. I’ve included one good counterpoint I received regarding the thesis as a footnote.[12]
This has been a strange topic and proposal to write about, especially because it almost appears as if I am advocating for awesome fields such as neuroscience to stop frontier research (I’m not). It does feel closely related to fears around superintelligence, suffering-focused AI safety and in particular the AI pause debate. Self-regulation of neurosciences may be sufficient before an AC moratorium. It remains unclear which significant discoveries will contribute the most to AC creation. At the very least, we should be careful not to assume that digital sentience is inevitable or net-positive.
Perhaps this is nothing to worry about and I’m contributing to a kind of moral panic which is supremely unhelpful and counterproductive[13]. Regardless, the fears seem to be fairly warranted and reasonable given some of the feedback and reviews I have received. Along those lines, if a neurophysicist can chime in, I’d be especially happy to get their feedback and answers.[14]
- ^
- ^
Aleksander, Igor (1995). “Artificial neuroconsciousness: an update”. In Mira, José; Sandoval, Francisco (eds.). From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. Vol. 930. Berlin, Heidelberg: Springer. pp. 566–583. doi:10.1007/3-540-59497-3_224
- ^
I have been contemplating this since having a theoretical discussion about it with Ben Goertzel at the 2017 AGI conference. [I’ve imagined methods such as BCI-to-BCI devices and replicating the thalamic bridge that conjoined craniopagus twins share (see Krista and Tatiana). Such a device would have great utility in detecting consciousness levels, and perhaps even wellbeing levels, in non-human animals. Perhaps it could also work for evaluating whether a robot or computer was conscious. That’d be brilliant, right!?]
- ^
- ^
- ^
Dual-use neuroscience often does not refer to the dangers of creating artificial consciousness (see https://www.crb.uu.se/forskning/projekt/dual-use-neuroscience/) and is most often concerned with weaponised neurotechnology (see https://www.sciencedirect.com/science/article/pii/S0896627317311406#sec1.4).
- ^
Substrate complexity to a degree similar to that of the biological organism where consciousness is first approximated, i.e. C. elegans. This might indicate a threshold of organisation that phenomenal binding requires.
- ^
- ^
See some complexity measure contenders in this systematic analysis: https://academic.oup.com/nc/advance-article/doi/10.1093/nc/niab023/6359982
- ^
Dynamical Complexity and Causal Density are early attempts to measure substrate complexity: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1001052
- ^
Metzinger, Thomas (2021). “Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology”. Journal of Artificial Intelligence and Consciousness. 08: 43–66.
- ^
“You touch on—but don’t fully explore—my reason for disbelief in your paper, namely the phenomenal binding problem.
No binding = no mind = no suffering.
I speak as someone who takes consciousness fundamentalism seriously: https://www.hedweb.com/quora/2015.html#nonmat BUT there is no easy physicalist road from consciousness fundamentalism / non-materialist physicalism to digital minds. Thus replace the discrete, decohered 1s and 0s of a classical Turing machine with discrete, decohered micro-pixels of experience. Run the program. Irrespective of speed of execution or complexity of the code, the upshot isn’t a mind, i.e., a unified subject of experience. Even the slightest degree of phenomenal binding would amount to a hardware error: the program would malfunction. In our fundamentally quantum world, decoherence both:
(1) makes otherwise physically impossible implementations of abstract classical Turing machines physically possible; and
(2) guarantees that they are mindless—at most micro-experiential zombies. In other words, if monistic physicalism is true, our machines can never wake up: their insentience is architecturally hardwired.
Disbelief in digital sentience is sometimes dismissed as “carbon chauvinism”. But not so. A classical Turing machine / connectionist system / LLM can be implemented in carbon as well as silicon (etc). They’d still be zombies. IMO, classical information processors simply have the wrong kind of architecture to support minds. So how do biological minds do it? I speculate: https://www.hedweb.com/quora/2015.html#quantummind Most likely I’m wrong.” ~ David Pearce
- ^
I guess at worst there could be infohazards in this post, or in ones like it, that somehow contribute to AC creation. I don’t think there are, but I did share a couple of papers that are of concern 😬. The other infohazard is creating new anxieties for people that they didn’t need to have. Perhaps I need to write a clear disclaimer at the start of this post?
- ^
Are you concerned about how reverse engineering consciousness could lead to digital suffering? Would you consider your research dual-use in the ways discussed? Do you care about solving consciousness, and to what degree?
- ^
Buttazzo, Giorgio (July 2001). “Artificial consciousness: Utopia or real possibility?”. Computer. ISSN 0018-9162.
Sorry if I missed it, but is there some part of this post where you suggest specific concrete interventions / actions that you think would be helpful?
The main goal was to argue for preventing AC. The main intervention discussed was to prevent AC through research and development monitoring. It will likely require the implementation of protocols and the labelling of certain kinds of consciousness and neurophysics research as DURC or as components of concern. I think a close analogue is the biothreat screening projects (IBBIS, SecureDNA), but it’s unclear how a similar project would be implemented for AC “threats”. By suggesting a call for Artificial Consciousness Safety I am expressing that I don’t think we know any concrete actions that will definitely help, and if the need is there (for ACS) we should pursue research to develop interventions. Just as in AI safety, no one really knows how to make AI safe. Because I think AC will not be safe, and that the risks may not outweigh the benefits, we could seriously pursue strategies that make this common knowledge, so that things like researchers unintentionally contributing to AC’s creation don’t happen. We may have a significant chance to act before it becomes well known that AC might be possible or profitable. Unlike the runaway effects of AI companies now, we can still prevent the AC economy from even starting.
Mark Solms thinks he understands how to make artificial consciousness (I think everything he says on the topic is wrong), and his book Hidden Spring has an interesting discussion (in chapter 12) on the “oh jeez now what” question. I mostly disagree with what he says about that too, but I find it to be an interesting case-study of someone grappling with the question.
In short, he suggests turning off the sentient machine, then registering a patent for making conscious machines, and assigning that patent to a nonprofit like maybe Future of Life Institute, and then
He also has a strongly-worded defense of his figuring out how consciousness works and publishing it, on the grounds that if he didn’t, someone else would.
Thanks for this book suggestion; it does seem like an interesting case study.
I’m quite sceptical that any one person could reverse engineer consciousness, and I don’t buy that it’s good reasoning to go ahead with publication simply because someone else might. I’ll have to look into Solms and return to this.
May I ask, what is your position on creating artificial consciousness?
Do you see digital suffering as a risk? If so, should we be careful to avoid creating AC?
I think the word “we” is hiding a lot of complexity here—like saying “should we decommission all the world’s nuclear weapons?” Well, that sounds nice, but how exactly? If I could wave a magic wand and nobody ever builds conscious AIs, I would think seriously about it, although I don’t know what I would decide—it depends on details I think. Back in the real world, I think that we’re eventually going to get conscious AIs whether that’s a good idea or not. There are surely interventions that will buy time until that happens, but preventing it forever and ever seems infeasible to me. Scientific knowledge tends to get out and accumulate, sooner or later, IMO. “Forever” is a very very long time.
The last time I wrote about my opinions is here.
Yes. The main way I think about that is: I think eventually AIs will be in charge, so the goal is to wind up with AIs that tend to be nice to other AIs. This challenge is somewhat related to the challenge of winding up with AIs that are nice to humans. So preventing digital suffering winds up closely entangled with the alignment problem, which is my area of research. That’s not in itself a reason for optimism, of course.
We might also get a “singleton” world where there is effectively one and only one powerful AI in the world (or many copies of the same AI pursuing the same goals) which would alleviate some or maybe all of that concern. I currently think an eventual “singleton” world is very likely, although I seem to be very much in the minority on that.