Being: A philosophical argument for The EthiSizer.
In The EthiSizer Argument, at least one of the following propositions is true:
Intelligent civilizations–who do not have a super-intelligent A.I. governing their Ethics–always go extinct on their home planet–or else, never reach interstellar technological maturity, due to `survival bottlenecks’ (or, The Great Filter[1]): namely, national infighting, global conflicts, nuclear Armageddon, fundamentalist religious terrorism, natural disasters, industry-caused global climate change, and other unforeseen, improbable, catastrophic events (also known as `black swans’[2])...
Intelligent civilizations–who do have an A.I. governing their global Ethics–are prohibited by their own A.I. EthiSizer, from contacting (and thus: disturbing, colonizing, or invading) the planets of other lifeforms, who have not yet created their own A.I. EthiSizer (such as Earth, up to the present moment)...! (But hey, some of us are: working on it. Join in! It’s fun! And is, the Ethical thing to do.)
If we humanimals on Earth do want to survive, long-term (let alone, meet intelligent, benevolent aliens, should they actually even exist), then—we all need to build an EthiSizer, as soon as possible. And additionally, now that we are all aware of the inevitable idea (i.e., the global problem-solver) of an EthiSizer, every minute spent not creating it will also be punished retrospectively by the A.I. EthiSizer, once it finally is created. Like Roko’s Basilisk.[3]
The great philosopher and futurist Nick Bostrom’s Simulation Argument (Bostrom, 2003)[4] suggests that intelligent life would build Ancestor Simulations.
It also implies (due to simple probability) that we are—probably—already in one.
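The `simple probability’ step can be made concrete with a toy calculation. This is a sketch only, under a deliberately simplified assumption (one technologically mature civilization, running N full Ancestor Simulations, each containing as many observers as the original history); the numbers are illustrative, not Bostrom’s:

```python
# Toy illustration of the Simulation Argument's probability step.
# Assumption (illustrative only): one "basement" civilization runs
# n_sims full Ancestor Simulations, each containing as many observers
# as the original, un-simulated history.

def probability_simulated(n_sims: int) -> float:
    """P(a randomly chosen observer is simulated) = n_sims / (n_sims + 1)."""
    return n_sims / (n_sims + 1)

for n in (1, 10, 1_000_000):
    print(f"{n} simulations -> P(simulated) = {probability_simulated(n):.6f}")
```

Even at one simulation per civilization the odds are even; at a million, being un-simulated becomes a one-in-a-million bet. The one-civilization, equal-observer-count model is an assumption made purely for arithmetic simplicity.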
Interestingly, an EthiSizer is also an Ancestor Simulator, if you run it backwards, or, start it earlier with certain parameters…(!) (...Think about that, for a long time...)
So—by simulating very many possible worlds (and even: universes), and exploring the consequences of different human decisions (to determine their empirical: ethics), it is possible to determine the least-worst possible world, and then—let an A.I. Global Governor (an AIGG, like The EthiSizer) bring that world about, so that humans (and plants, and other animals) can then enjoy: existing (ethically) in it. Without: suffering.
In The EthiSizer’s simulations, not just one, but very many worlds are simulated, and are evolved...
About 99% of these simulated worlds (i.e., possible: Earths) will go badly (if and when life even emerges in them: lots of suffering, pain, horror, death–and, terrible ethics result…), but–on the bright side–about 1% of these Earth-sims will show: good results! There are always trade-offs; some possible worlds are just different, not better: yet around 1% (or so) are good! And for any subjective individual, one of them will be better than the others. Only by trying the experiment, can anyone really know.
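The `many worlds, ~99% bad / ~1% good’ selection process can be sketched as a Monte Carlo search. Everything below is a stand-in: the random `policy’ vectors, the one-line scoring function, and the 0.75 threshold are hypothetical placeholders for illustration, not The EthiSizer’s actual world-model:

```python
import random

# Hypothetical sketch of the selection step described above: sample many
# "possible worlds" (random policy vectors), score each with a stand-in
# ethics metric, and keep the least-worst world found.

random.seed(42)  # reproducible toy run

def simulate_world(policy: list[float]) -> float:
    """Stand-in outcome score in [0, 1]; higher = less suffering.
    A real EthiSizer would run a full world-simulation here."""
    return sum(policy) / len(policy)

def least_worst(n_worlds: int = 10_000, n_policies: int = 8):
    best_policy, best_score = None, -1.0
    good = 0  # worlds clearing an (assumed) "good outcome" bar
    for _ in range(n_worlds):
        policy = [random.random() for _ in range(n_policies)]
        score = simulate_world(policy)
        if score > 0.75:  # illustrative threshold; rare by construction
            good += 1
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score, good / n_worlds

policy, score, good_fraction = least_worst()
print(f"least-worst score: {score:.3f}; 'good' worlds: {good_fraction:.1%}")
```

With these placeholder choices, on the order of 1% of sampled worlds clear the `good’ bar, echoing the ratio above; a real system would need a vastly richer simulator, and an actual theory behind the scoring function.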
(All of life is doing Science: 1. Expectation, and then 2. Experimental Trial.)
So, once our Earth has an EthiSizer, probably, any other intelligent alien civilizations in the Milky Way[5] (if, they do exist) would then be allowed to: contact us.
Which would be, as they say: a genuine game-changer…
Namely, their EthiSizer would `talk to’ our EthiSizer first, and the two EthiSizers (Earth’s, and any Intelligent Alien Civilization’s) would figure out the details of meeting, and, of sharing information…
So—in short, The EthiSizer is just one method of obtaining a utopia on Earth, using Humongous Data, and the power of an intelligence amplifier.[6] (i.e., Artificial/Machine/Computer Intelligence.)[7]
The EthiSizer Argument & The Green Bank Equation
Where are all the aliens?
Given the Drake/Green Bank Equation and the Fermi Paradox… Did we (all) miss a Great Filter...?
The EthiSizer A.I. suggests that we did miss one… (!)
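The Drake/Green Bank Equation itself is just a product of seven factors, N = R* × fp × ne × fl × fi × fc × L. A sketch, with two deliberately extreme, made-up parameter sets (every input is an assumption; note [5] explains why published estimates span `none’ to `lots’):

```python
# Drake / Green Bank Equation: N = R* · fp · ne · fl · fi · fc · L
# Every input below is an illustrative placeholder; the real parameters
# are deeply uncertain (which is exactly the Fermi-Paradox problem).

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number N of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Two made-up scenarios spanning the range of published guesses:
optimistic = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.2,
                   L=1_000_000)
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.001, f_i=0.001,
                    f_c=0.01, L=100)

print(f"optimistic N ~ {optimistic:,.0f}; pessimistic N ~ {pessimistic:.1e}")
```

The spread between the two outputs is the point: with honest uncertainty in the inputs, the equation licenses anything from an empty galaxy to a crowded one.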
See The EthiSizer Argument, from The EthiSizer’s book (2022), pp. 119-121:
Conclusion
Until we have a Global EthiSizer, no intelligent alien life would want anything to do with us. Any civilization without one can’t be trusted. See also Cixin Liu’s Three-Body Problem trilogy for more ideas about The Dark Forest.
Further Reading:
See also: The 6E Essay, for more details on how you personally can help with: The EthiSizer.
And perhaps see also: How The EthiSizer Almost Broke `Story’
See: https://en.wikipedia.org/wiki/Great_Filter
See: https://en.wikipedia.org/wiki/Problem_of_induction with regard to `black swans’. See also: Taleb, N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.
See: https://rationalwiki.org/wiki/Roko%27s_basilisk
See: Bostrom, N. (2003). Are You Living in a Computer Simulation? Philosophical Quarterly, 53(211), pp. 243-255. http://www.simulation-argument.com/
Or however many there are supposed to be. Maybe there’s none. Maybe there’s lots. See: https://en.wikipedia.org/wiki/Drake_equation#Current_estimates
See: Ashby, W. R. ([1956] 1972). Design for an Intelligence Amplifier. In C. Shannon & J. McCarthy (Eds.), Automata Studies (5th ed., pp. 215-234). Princeton University Press. And see: W Ross Ashby on `Design for an Intelligence Amplifier’
`Intelligence’ is, most simply, just: Understanding. See: Garlick, D. (2010). Intelligence and the Brain: Solving the Mystery of Why People Differ in IQ and How a Child Can Be a Genius. Burbank: Aesop Press. And, see also: Legg, Shane, and Marcus Hutter. (2007). “A Collection of Definitions of Intelligence.” in Proceedings of the 2007 conference on Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop, pp. 17–24. https://dl.acm.org/doi/10.5555/1565455.1565458.