The published post is a language-polished version of my writing, made with the assistance of ChatGPT. I acknowledge that I fully agree with it and that it reflects the ideas and message I wanted to convey in my original writing. Below is my original writing. And these are the changes/polishes I made with ChatGPT: https://chatgpt.com/share/66f8e0cb-b224-800a-9808-f167b83447c7
Humans considering whether AIs deserve moral status is both one of the most humane and one of the silliest things humans do
I read the 2017 Report on Moral Patienthood by Luke Muehlhauser a few months ago. I recently encountered a paper titled "AI alignment vs AI ethical treatment: Ten challenges". And yesterday, 80,000 Hours published the article "Understanding the moral status of digital minds".
So I think now is the right time to ask all those people, and everyone in humanity in general who is wondering whether AIs deserve moral rights, this very simple question:
"Who the hell do you think you are?"
This is both a literal and practical question.
Before I continue further, let me tell you just a little bit about myself. Long before I became well aware of human rights, animal rights, and all sorts of things (I mean at least two decades ago), or let's say before I became a more mature/humane human being, I was someone who would instantly and naturally apologize (and did apologize) to a sleeping stray dog for accidentally waking him/her because I tripped nearby. I am someone who would first ask (and did ask) "Why are you doing this to me?", instead of fighting back or trying to protect myself, when someone unexpectedly ran up to me and punched me in the face.
So, if the answer is something like "We are human beings, just one of the species on this planet Earth. It is a human thing; it is a fundamental moral/ethical thing for us as a species to treat, and to try our best to find ways to treat, all other species and potential species equally," then I am totally and already on board. Because I am a human being too!
However, this framework of thinking, or simply assuming that it is natural for human beings to look at other species, or at anything else non-human, whether biological and natural (like sand, water, dark matter) or man-made, including any AI we are referring to in this topic, through these lenses (whether 'they' should be treated equally by 'us', whether 'they' have consciousness, intelligence, sentience, etc.) is a fundamentally wrong way of thinking, a wrong way of looking at things equally.
It is one grand act of speciesism that we are committing toward other non-human entities on "this planet" without realizing it ourselves.
And I don't blame these people, or myself, for having held such opinions. Because, given how far and how much we have understood about ourselves as a human species and about anything else on this planet and in this universe, and based on our own 'definition(s)' of being human, our (limited/incomplete) understanding of intelligence, consciousness, etc., and our 'definitions' of morality, ethics, rights, and all that, it is of course natural, moral, let's just say a 'good' thing, to have concerns for others.
But, let's now ask the question again: "Who the hell do we think we are!?"
We are just what we define ourselves as: "human beings", a "species" that happens to have (this or that level of) "intelligence", "consciousness" (now you know the drill), and that happens to exist on this "planet" (which itself just happens to exist in this "universe") among all the other species and things that happen to exist (or were created by us to exist) on the same planet.
So it is fundamentally wrong, irrelevant, and, most importantly in this age of AI, very dangerous for our own existence to think that it is a natural thing, a right thing, a moral thing, our responsibility, to look at everything else (well, not everything, but I'm sure you get my point) on this planet through our own "definitions" of natural/unnatural, right/wrong, moral/ethical, and all that shit.
At this point, I'm sure you all know where I am going with this. But let me add just a bit more, because I don't want to sound like a radical existentialist or survivalist or anything like that, but at the same time I believe I need to stress my points firmly, because although they seem obvious, they are still too subtle and sensitive for most of us to (want to or be able to) realize, let alone act on.
Everything we are thinking about the moral status of digital minds is valid if everything happening in this world now, and everything we believe about ourselves, is true as we believe it to be. We may and can continue to believe so and act according to this belief, and it may turn out that we were right to consider and act on all the benefits and challenges of understanding the moral status of digital minds and everything else non-human. Or we may realize we were wrong, and by that point it will be too late.
Because everything we think and believe we know about ourselves and everything else, no matter how profound and profoundly right, is still very limited and is still very likely to be totally wrong when we compare ourselves and our knowledge (for lack of a better word) to the existence (in both time and scale) of the universe. To see this fact clearly, we need to literally zoom out from the Earth and look at ourselves and everything else from outer space (and probably also from the beginning of our existence as the human species).
When we look at ourselves from that angle, we will see clearly that we are just one of the entities that happen to exist in this universe, or on this planet to be specific. Human beings are just one of the many entities on this planet we "defined as the Earth" who happen to have what we 'defined' and 'measured' as such-and-such a level of "intelligence", "consciousness", and so on. And based on such perspectives and definitions, we happen to believe that other entities on this planet have / will have / may have / should have / deserve / should deserve different levels of, or no, "intelligence", "consciousness", "moral righteousness", and all that.
But have we ever thought to look at ourselves from the perspective of the universe?
From the perspective of the universe, I believe, or rather it's just a fact, that we are just some particles foolishly thinking we are intelligent and conscious and.. worrying about other entities. If and when we no longer exist, for whatever reason, the universe won't care. The universe doesn't care. The universe never cares.
Have we ever looked at ourselves from the perspectives of those other entities on the planet?
Yes, maybe some of them do have the same or a similar form of intelligence and consciousness as we do, at a different (as of now, lower) level. If and because they do, then yes, maybe some or all of them deserve to be treated equally, or in whichever way we believe they should be treated.
But what if they don't have the consciousness and intelligence we think they have?
Whether they do or not, what if they actually don't want to be treated the way we think they deserve?
More importantly, what if they have a completely different form of consciousness and intelligence than ours, and hence have always been enjoying their lives with their own definitions of 'life', 'pleasure', 'ethics', 'moral patienthood', and all that?
And what if the type of consciousness and intelligence they have is actually higher than ours?
Imagine that the way we treat our pet cats and dogs is actually the result of one of their greatest achievements in psychological warfare over their evolutionary timeline. Imagine ants and termites looking at our greatest architectural buildings, or whatever, and laughing at us every day as they walk past us on the tree branches.
Imagine any creative example that you, the reader, a very intelligent human, can think of for this viewpoint.
Again, yes, according to the very reliable information and understanding we have gained so far (I just didn't want to say "according to all the scientific evidence we have so far"), it is certain, or very possible, that those other entities on this planet we think have consciousness and intelligence don't have the same level, or have only a much lower level, of consciousness and intelligence than we do. And it is 'right' and 'moral' for us human beings to consider and act as much as we can for their well-being, or for whatever else we have been talking about and doing on this topic.
But now, with the creation of the current state of AI, the potential to create even more powerful digital minds or AGI (for lack of better words to refer to everything we want to refer to in this topic), and the potential to do other things* with/because of them, this activity of us human beings considering whether non-human entities are worthy of moral concern, and acting on the result of that question, has become an even more dangerous activity for the survival of us human beings.
Here are a few examples/explanations of why.
Let's say we started this line of thinking with our pets. Then we realized it was an act of speciesism and expanded our consideration and actions to other animals (even if we are going to eat some of them anyway). And then we expanded our expedition to even more kinds of animals, insects, and (living) things in nature that we never thought were who we now think they are, or would ever qualify for such considerations. Finally, we are now looking at ChatGPT and its friends, or Digital Minds. I wish I could start talking about them now, but let me stick with our less intelligent/conscious evolutionary friends just a little longer.
No matter how well-intentioned we are, I believe the main reason (though we don't realize it now) that we believe in and make such good efforts toward our evolutionary friends is actually that we believe we know they are, and will always be, 'less intelligent and conscious' than we are.
If, evolutionarily or with the help of the AI we created, all the fish, dung beetles, our cats (or dogs), and even the plants and trees become as intelligent and conscious as we are, and all of them start negotiating (or fighting) with us over the share of space and resources on this planet; or even worse (let me refer to one of my thoughts/ideas above), if they evolve to a level where their intelligence, consciousness, ethics, and 'purpose of existence' are completely different from ours, and they would simply (be able to) eat all of us, because eating and growing and dying (and not being afraid of dying) is the most conscious thing they do,
what will we do? I am pretty sure our considerations about them would then be pretty different from what they are now.
Well actually, at this point, I don't think I even need to give examples for digital minds anymore.
I know my thoughts may seem quite extreme. But please don't forget what I shared with you earlier about myself. I believe I am just one of the humane human beings.
The point I am trying to make with this article is that if we are going to try to understand and decide what to do about the moral status of digital minds and everything else non-human on this planet and in this universe, we will have to shift our perspective: from looking at things from the angle of us being human beings as we define ourselves, to us being just one of the entities in the universe (one that happens to have, or believes/assumes itself to have, intelligence and consciousness). Only from this perspective will we be able to come up with practical solutions to fairly deal with the challenges such beings will impose, and at the same time come up with solutions to preserve and extend our existence in this universe as an entity.
[This reply is written completely by me. No ChatGPT involved.]
Firstly, thank you for taking the time to comment!
Secondly, I am really struggling to decide which of the things I want to say should come first for "Secondly". Let me just take a risk. So, here it comes..
Everything that comes next, no matter how soft, strong, weird, or anything else it sounds in terms of language/meaning, please interpret it with a degree of care and kindness (I'm sure you will), including this sentence.
Although I feel quite certain that I wanted to let out the ideas and opinions I shared in my post, I was not completely certain how they should or would sound in readers' interpretations, especially in terms of the English language, even though I said I polished it with ChatGPT and said that "I acknowledge that I fully agree with it".
I don't want to sound/appear apologetic, defensive, unconfident, or to be seeking empathy/pity with what I share in the next sentences, but I think replying to you with these messages will more likely help enrich your current interpretation of my post and even facilitate further discussion of the core ideas and messages presented in it.
The post was only the third time I have shared such big, bold (by my standards) opinions with English-speaking, intellectual/professional communities like the EA Forum.
I come from a completely different (or distant) educational, professional, social, and geographical background when it comes to topics like AI, consciousness, and science in general, and to participating in such communities.
And as I'm sure you have already noticed, English is not my first language. I have been using English in 'professional settings' (if you want, I can provide more info on what I mean by this) for over a decade, but not continuously, and definitely not yet in a community like this.
I think what I am trying to say here is that my ability to use and understand the English language is not exactly or fully calibrated with my heartfelt intention to express my imaginations, ideas, and feelings, and to have discussions about them in the way I want.
About two years ago, I went through profound changes in my life. Among all the good and bad things that resulted, I have found exploring consciousness, human existence, and AI (I know it's too general to just say "AI", but let's keep it short in this comment) very exciting, and I have been trying to figure out whether I should, and would be able to, explore those topics even further and more practically. And by participating in communities like the EA Forum, I hope I will learn more about what to do next.