I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.
I'm also on LessWrong and have a Substack blog.
Thanks for the comment!
I've always heard "pinpricks vs torture" or the Omelas story interpreted as an example of the overwhelming badness of extreme suffering, rather than as an argument against scope sensitivity. I've heard it cited in favor of animal welfare! As one can see from the Dominion documentary, billions of animals live lives of extreme suffering. Omelas could be interpreted to argue that this suffering is even more important than is otherwise assumed.
I think it's hard to say what the simulation argument implies for this debate one way or the other, since there are many more (super speculative) considerations:
If consciousness is an illusion or a byproduct of certain kinds of computations which would arise in any substrate, then we should expect animals to be conscious even in the simulation.
I've heard some argue that the simulators would be interested in the life trajectories of particular individuals, which could imply that only a few select humans would be conscious and nobody else. (In history, we tell the stories of world-changing individuals, neglecting those of everyone else. In video games, often only the player and maybe a select few NPCs are given rich behavior.)
The simulators might be interested in seeing what the pre-AGI world may have looked like, and will terminate the simulation once we get AGI. In that case, we'd want to go all-in on suffering reduction, which would probably mean prioritizing animals.
I agree with you that many claim the moral value of animal experiences is incommensurate with that of human experiences, and that categorical responsibilities would generally also favor humans.