Debating AI’s Moral Status: The Most Humane and Silliest Thing Humans Do(?)

A few months ago, I read the 2017 Report on Consciousness and Moral Patienthood. More recently, I came across a paper titled AI alignment vs AI ethical treatment: Ten challenges. And just yesterday, I found 80,000 Hours’ article, Understanding the Moral Status of Digital Minds. Given this ongoing discourse, it seems like the perfect moment to pose a simple but critical question to everyone:

Who do we think we are?

This isn’t just a rhetorical question—it’s deeply personal. Long before I developed a deeper interest in topics like consciousness, intelligence, morality, and ethics, I was already driven by a natural empathy for all beings, human and non-human. So, this article isn’t going to be about rejecting our moral responsibility, but rather questioning whether our frameworks are fit to judge entities so fundamentally different from us.
If the answer to my earlier question is something like, “We are human beings. It’s a moral responsibility to treat all other species and potential entities with fairness,” then I’m already on board. I, too, am a human being who cares deeply about treating others with respect and kindness. But the real issue here is the very framework of thinking that places us as the judge and jury over other species—whether biological, elemental, or artificial. Judging everything through human-centric lenses—whether it’s about consciousness, intelligence, or sentience—is a fundamentally flawed approach.
It’s one grand act of speciesism we’re practicing without even realizing it.
And I don’t blame people for this mindset. After all, given our current understanding of intelligence, consciousness, and morality, it’s only natural to believe that it’s our responsibility to decide how to treat other entities. But let’s ask the question again: Who do we think we are?
We are merely what we define ourselves to be: “human beings,” a species with a certain level of intelligence and consciousness, existing on a planet we call Earth—just one among many entities in a universe that is infinitely vast and indifferent to our existence. Yet, we continue to place ourselves at the center of every moral and ethical question, assuming that our definitions of right and wrong, natural and unnatural, are universal.
But I think this viewpoint is fundamentally wrong, irrelevant, and—especially in the age of AI—dangerous for our own survival. To think it’s our duty to decide the moral status of non-human entities using our own limited definitions of morality and ethics is not only arrogant but also potentially disastrous.
By this point, I’m sure you see where I’m heading. But let me stress this further because, obvious as it may seem, it’s a truth most of us either fail or refuse to see. Everything we think about the moral status of digital minds is valid only if we assume that our current understanding of reality is accurate. We may act on these beliefs, and perhaps we’ll be right—at least until the day we realize we were wrong and it’s too late to change course.
Because no matter how profound or well-intentioned our beliefs are, they’re still likely to be completely wrong when compared to the sheer scale and complexity of the universe. To grasp this fact, we need to zoom out of our human-centric view and look at ourselves from a cosmic perspective, as just one of countless entities in the universe.
Viewed this way, humans are just another particle in the grand scheme of existence, foolishly assuming we’re the only ones with intelligence, consciousness, and a moral compass. If we vanished tomorrow, the universe wouldn’t notice. It doesn’t care. It never has.
And what about viewing ourselves from the perspective of other entities? Maybe some have intelligence similar to ours, or maybe they don’t. But what if they have completely different forms of intelligence, experiencing life through senses and dimensions we can’t comprehend? What if, by our definitions, their level of consciousness is far superior?
Imagine, for a moment, that the way we treat our pet dogs and cats is actually a result of their own successful evolutionary manipulation. Imagine ants and termites laughing at our grand architectural achievements every time they build a new mound or nest. What if these beings view us as simple, amusing creatures?
Again, based on all the scientific evidence we have, it may seem like non-human entities possess only a lower level of intelligence and consciousness. And yes, it may seem “right” and “moral” to ensure their well-being. But now, with the creation of advanced AIs and the potential for even more powerful digital minds, we’re facing a completely different reality.
Consider how our moral circle has expanded over time: we began with our pets, then extended our concern to other animals. Eventually, we considered even the well-being of plants, insects, and ecosystems. Now, we’re debating how to apply these concepts to digital entities like AI chatbots and beyond.
But no matter how well-meaning we are, the core reason we think this way is rooted in an unspoken belief that these entities will always (have to) be less than us. If, one day, fish, beetles, or even plants evolve—through natural processes or with the aid of AI—to the same level of consciousness and intelligence as us, what then? What if they have completely different ethics and purposes that contradict ours?
Imagine a world where fish or plants view growth and consumption as their highest moral calling and see humans merely as food. Would our moral debates matter then? Would they even care?
I don’t think I need to delve into the complexities of digital minds any further at this point. I may sound extreme, but I still consider myself a humane human being. This isn’t about dismissing morality—it’s about recognizing the limits of our perspective.
The universe, after all, is indifferent to whether we thrive or self-destruct. If we truly want to cohabit with new forms of intelligence—whether biological or digital—we must expand our moral imagination beyond what we know and embrace a humbler role as just one of countless entities seeking meaning.
Only by adopting this mindset can we create fair solutions for these emerging entities and, ultimately, discover how to preserve and extend our fragile existence in this vast, indifferent cosmos.