(Also, this is very tangential to the main thread, but I spotted in your footnote:
I think being a simulation should increase your credence in theories where only you and perhaps those close to you are actually conscious, and others are low-fidelity constructs which have much less expected moral importance.
I believe this is likely incorrect, for the reasons Bostrom gives:
In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.
Nonetheless, I do agree with the other point you make:
If we knew for certain that our simulators would turn us off once the hinge of history was over, I think our priority would shift to reducing as much present-day suffering as possible, which I think in practice means prioritizing animal welfare.
)