The Ethical Basilisk Thought Experiment

A couple of years ago, a thought experiment occurred to me after I had spent some time exploring how well Effective Altruism could be baked into a system aimed at ethics. A year ago I put that experiment to paper, and this year I finally decided to share it. The full text is available here: http://dx.doi.org/10.13140/RG.2.2.26522.62407

The paper discusses a few critical points for calculating ethical value, positive and negative, including an edge case that some members of this community have been unwittingly sitting in the middle of. No one has yet refuted even a single point made in the paper, though several have pointed to portions of it being emotionally unappealing. Some discomfort is to be expected, as reality offers no sugar-coating.

I’m sharing it now to see whether the EA community fares any better than the average person when it comes to ethics, or whether it is perhaps driven more by emotional phenomena than strictly ethical motives, as cognitive bias research could imply. One of the wealthiest individuals in this community has failed already, after investing repeatedly in AI frauds who preyed on this community. The question this post will answer for me is whether that event was more likely random or systemic.

Thank you in advance. I look forward to hearing any feedback.

*Note: Unlike the infamous “Roko’s Basilisk”, it doesn’t matter at all whether someone reads it or not. In any scenario where humanity doesn’t go extinct, the same principles apply. People remain accountable for their actions, proportionate to the responsibilities they carry, regardless of their beliefs or intentions.