Here’s my entry: Retrocausal missives from the deep past, vol XII: the menace of the OI Bodhisattva mind.
Aatu Koskensilta
I’m a “gun-to-my-head” negative utilitarian—that is, if I’m pressed, or in situations where my ordinary pragmatic parsing falls apart, if Omega asks me, so to speak, that’s what I’d recommend. The alternatives are just too ghastly to contemplate. Here I think most of us are simply extremely deluded about how truly bad bad experiences can be; if we had any glimpse of a shadow of a vague idea of just how horrible extreme suffering can be, we’d all be frantically flailing about for the infamous “off button” on reality… On this topic, see: https://qualiacomputing.com/2019/08/10/logarithmic-scales-of-pleasure-and-pain-rating-ranking-and-comparing-peak-experiences-suggest-the-existence-of-long-tails-for-bliss-and-suffering/
There is a considerable body of considered thought in this area, and instead of waxing abstract about utilitarian calculus in Philosophy 101 terms, I suggest consulting the work of suffering-focused ethicists such as David Pearce, Magnus Vinding, Jonathan Leighton, Brian Tomasik, etc. The good folks at the Qualia Research Institute are also trying to figure out the ground facts about all this. The nature of valence is an open question, and how any sort of utilitarian ethics will play out depends on the actual details of what value actually is, as an objective feature of reality.
I personally have found negative utilitarianism an oddly cheery ethics. After all, even the faintest hint of any despair or horror at existence is, ceteris paribus, not something that negative utilitarianism would recommend! (Negative utilitarianism might indeed be a self-limiting meme, in that in many cases the prudent negative utilitarian choice is to adopt some more psychologically adaptive explicit ethical system, and there is a strong case for not proselytizing, given the psychological harm these ideas can do at least in unsophisticated form.)
The chapter by Michael D. Wise?
I quickly skimmed it, and perhaps my reading here is uncharitable, but it did not actually seem to say anything substantial about the problem at all. It merely offers (in themselves interesting) historical reflections on the problematic nature of conceiving the human/non-human animal relationship in terms of property or ownership, and general musings on the chasm that separates us from the lived experience of beings very different from us.
Is there any substantial engagement with the problem of wild animal suffering in the essays in the book?
Are there other sentient beings in the universe in this scenario? Should I take into account the fact that in this scenario something virtually impossible appears to have happened, so I live in a reality where virtually impossible things happen, meaning something is clearly wrong about my ordinary picture of the world?
I think I sort of get what you’re trying to do, but it’s surprisingly difficult to make the thought experiment do that (at least for me)! What happens in my case is that I get caught up in stuff the scenario would seem to imply—e.g. that virtually impossible things happen, so trying to sort out the expected outcomes of decisions is difficult—then sort of remind myself that that’s not the point (this is not some subtle decision-theoretic thought experiment!), but then become annoyed because it seems I’d need to just directly work out “what I value about my existence” and transfer that into the hypothetical situation. But then, what do I need the hypothetical situation for in the first place?
Depending on how valence turns out to work, and if there really are no other sentient beings in all of reality, suicide (at least of the evolutionary illusion of a unitary self persisting over time) sounds like a good option: completely untroubled lights-out state for the entirety of the field of consciousness.
But perhaps it would be a good idea to explore the (positive or at least neutral valence regions of the) state space of consciousness, using whatever high-tech equivalents of psychedelia and the consciousness technologies of the contemplative traditions the spaceship has to offer, and see what emerges, provided the technology available on the spaceship allows for this to be done safely. The idea here is to give “different states of consciousness” a say in the decision, so we don’t just, e.g., end all sentience because we’re in a dark mood (perhaps scared shitless at finding ourselves in what seems to be an impossible situation, and so pretty paranoid about trusting our own epistemology). This would create a sort of small community of “different beings”—the dynamics within and among the different states of consciousness—collectively figuring out what to do. I would not be at all surprised if the peaceful cessation of all sentience were ultimately the decision, but if there are no other sentient beings, and I’m not myself suffering particularly intensely, making sure this is the right thing to do would also seem prudent.
But again, a reality where an apparently evolved world simulation pops into existence with no past causal history would seem to be radically different from the one we appear to live in, so again, it’s difficult to say!
For those who liked the story, after the following additional short missive, I can now present The Bodhisattva of Friendliness as the Green-Haired Hacker Girl, vol I: an aesthetic exploration, set in the same universe.
This is just a quick proof of concept. I hope to soon have something a bit more polished.
ETA: I just realized I linked to a huge PDF that triggers a warning that it can’t be checked for viruses. So for those who understandably might be reluctant to download it and see what it contains, this FB post provides a sample: https://www.facebook.com/aatu.koskensilta/posts/pfbid02zLuqsiU3HJW2ix9VPcpVwsSLgWhXUcDuv6EhMSnZFYZLYnhimrdHRsK9mZXYdQBql
(It’s a sort of picture book for sentient beings of all ages, and also a transcript of executing a sort of search strategy in the space of possible aesthetics with the aid of ChatGPT+ and DALL-E 3, with a very specific (and hopefully transparent) purpose in mind.)