The concrete suggestions here seem pretty wild, but I think the possible tension between computationalism and shrimp welfare is interesting. I don’t think it’s crazy to conclude “given x% credence on computationalism (plus these moral implications), I should reduce my prioritization of shrimp welfare by nearly x%.”
That said, the moral implications are still quite wild. To paraphrase Parfit, “research in [ancient Egyptian shrimp-keeping practices] cannot be relevant to our decision whether to [donate to SWP today].” The Moral Law keeping a running tally of previously-done computations and giving you a freebie to do a bit of torture if it’s already on the list sounds like a reductio.
A hazy guess is that something like "respecting boundaries" is the missing component here. Maybe there is something wrong with messing around with a water computer that's instantiating a mind, because that mind has a right to control its own physical substrate. That seems hard to fit with utilitarianism, though.