Kardashev for Kindness

Abstract

This is an argument for adopting a defined set of long-term goals for humanity’s development beyond an energy scale: we should use ethics, not technology alone, to measure how advanced our civilization can become.

Disclaimer: this article was inspired by content from an exurb1a video. I was hesitant to use this material because of a troubling story that has come to light about the creator exurb1a, but felt it could make for a valuable conversation in the EA sphere. If you’re curious, you can find that story here.

Section 1:

Nice to Meet You, Nikolai

You might already be familiar with the Kardashev Scale and its way of measuring a civilization’s level of advancement by how much energy it is able to harness, divided into three types. A Type I civilization has harnessed all available energy on its planet, a Type II has done the same with all available energy in its star and planetary system (hello, Dyson Sphere!), and a Type III has mastered all available energy in its local galaxy.
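For readers who like numbers: Carl Sagan later proposed a continuous version of the scale that interpolates between the three types by total power consumption, with Type I pegged at roughly 10^16 watts, Type II at 10^26, and Type III at 10^36. Here is a minimal sketch of that interpolation in Python (the figure for humanity’s current consumption is only an order-of-magnitude estimate):

```python
import math

# Carl Sagan's continuous interpolation of the Kardashev Scale:
#   K = (log10(P) - 6) / 10, where P is power consumption in watts.
# Under this formula: Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W.
def kardashev_level(power_watts: float) -> float:
    """Return the continuous Kardashev level for a given power use in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current consumption is on the order of 2e13 W.
print(f"Humanity today:     Type {kardashev_level(2e13):.2f}")   # ~0.73
print(f"Type I threshold:   Type {kardashev_level(1e16):.2f}")   # 1.00
print(f"Type II threshold:  Type {kardashev_level(1e26):.2f}")   # 2.00
print(f"Type III threshold: Type {kardashev_level(1e36):.2f}")   # 3.00
```

By Sagan’s formula we currently sit somewhere around Type 0.73, still well short of Type I.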

Nikolai Kardashev, the man behind the concept, was a Soviet radio astronomer who worked on the search for extraterrestrial intelligence, scanning the cosmos for signals from alien civilizations. He caused one of the most culturally influential false alarms about alien existence when he proposed that the newly discovered CTA-102, a blazar-type quasar, might be evidence of a Type II civilization attempting to communicate. The public announcement of that possibility made a sensational impact; The Byrds even wrote a song about it in 1967.

However, there’s something I feel should have been more strongly considered. What would a highly advanced civilization, one with mastery over an entire planetary system, think of us and our ethical atrocities? If we advance to Type I, will we still be letting our own die when we could prevent it, or causing horrific amounts of suffering in factory farms? Would such a civilization be equally cruel, even worse, or view our inadequacies with shock and disgust? And most notably for this topic: on what other metric could we measure how “civilized” our civilization has become? In the exurb1a video mentioned above, a different scale is proposed: the Kardashev scale of wisdom. To make it more EA-oriented, I like to view it as a kinder way to measure our own advancement.

Section 2:

Kardashev for Kindness

Type I

In a Type I Wisdom civilization, no one goes hungry, no one dies of curable diseases, and everyone is guaranteed a universal baseline at which their basic needs are met. Humanity no longer exploits the suffering of non-human animals for unnecessary pleasure. (exurb1a placed concern for non-human animals under a Type II civilization, but I chose to give it more immediate urgency.)

Type II

In a Type II Wisdom civilization, no human has to involuntarily experience maladies like mental illness or chronic pain; extreme human suffering that cannot be consented to (see Vinding and Tomasik) is wiped out entirely. Unnecessary non-human suffering is also largely eliminated, including unnecessary wildlife suffering; for example, 999 out of every 1,000 baby sea turtles would no longer have to die horribly from exposure and predation.

Type III

In a Type III Wisdom civilization, nothing and no one has to experience suffering at all, whether human, non-human animal, or sentient AI.

Section 3:

So, why does this matter?

In Suffering-Focused Ethics: Defense and Implications, Magnus Vinding makes a tour-de-force argument for why we should care about the elimination of suffering. I won’t belabor it here beyond restating one of its main points: there is a severe asymmetry between pleasure and suffering, and the mere existence of extreme suffering calls out for redress. This isn’t a hard argument to make here on the EA Forum. If there’s one thing we have in common, it’s a strong capacity for actually caring: caring about real-world impact, and about making humanity just a bit more morally conscious than it is now.

When we fix our eyes on the future, we have well-established concepts like the Kardashev Scale waiting for us: time-honored milestones for measuring how far we’ve come in scientific progress. Yet when it comes to our ethical goals for the future, all we’ve got is a blur of good intentions. Maybe one day we’ll stop torturing and killing animals, suffering from diseases, starving to death, and shooting missiles at each other. Establishing concrete goals and measuring our progress against them could be key to ensuring that we advance toward a future that isn’t just impressive on the traditional Kardashev scale, but kind, compassionate, and free of unnecessary suffering.