# AidanGoth comments on Reducing existential risks or wild animal suffering?

• I’m not entirely sure what you mean by ‘rigidity’, but if it’s something like ‘having strong requirements on critical levels’, then I don’t think my argument is very rigid at all. I’m allowing agents to choose from a wide range of critical levels. The point, though, is that given the well-being of all agents and the critical levels of all agents except one, there is a unique critical level that the last agent has to choose if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents choose a different critical level from the one I have suggested, but note that doing so leaves you open to the sadistic repugnant conclusion. That is, I have suggested the critical levels that agents would choose, given the same choice set and given that they have preferences to avoid the sadistic repugnant conclusion.

Sure, if k is very low, you can claim that A is better than Bq, even if q is really, really big. But, keeping q fixed, there’s a k (e.g. 10^10^10) such that Bq is better than A (feel free to deny this, but then your theory is lexical). Then at some point (assuming something like continuity), there’s a k such that A and Bq are equally good. Call this k’. If k’ is very low, you get the sadistic repugnant conclusion. If k’ is very high, you face the same problems as lexical theories. If k’ is neither too high nor too low, you strike a compromise that makes the conclusions of each less bad, but you face both of them, so it’s not clear this is preferable. I should note that I thought of and wrote up my argument fairly quickly and quite late last night, so it could be wrong and is worth checking carefully, but I don’t see how what you’ve said so far refutes it.
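To make the crossover point concrete, here is a minimal numerical sketch. The setup is my own construction, not the exact example from the earlier comments: assume population A has n_a people at high welfare w_a, population Bq has q people each at welfare k, and everyone shares a common critical level c, so a population’s value is the sum of (welfare − critical level) over its members. Then the indifference level k’ solves q·(k’ − c) = n_a·(w_a − c), and as q grows, k’ falls toward c, which is the pressure toward the sadistic repugnant conclusion:

```python
# Hypothetical illustration (my assumptions, not the author's exact setup):
# critical-level utilitarianism scores a population by summing
# (welfare - critical level) over its members.

def value(sizes_welfares, c):
    """Total relative utility: sum of n * (welfare - critical level c)."""
    return sum(n * (w - c) for n, w in sizes_welfares)

def crossover_k(n_a, w_a, q, c):
    """Welfare level k' at which Bq is exactly as good as A:
    q * (k' - c) = n_a * (w_a - c)  =>  k' = c + n_a * (w_a - c) / q."""
    return c + n_a * (w_a - c) / q

# A: 1,000 people at welfare 100; common critical level c = 1.
# As q grows, the indifference welfare k' sinks toward c.
for q in (10**4, 10**6, 10**9):
    print(q, crossover_k(1_000, 100, q, 1))
```

With these (made-up) numbers, k’ is already close to the critical level at q = 10^6, illustrating why a low k’ delivers the sadistic repugnant conclusion while forcing k’ high pushes the view toward lexicality.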

My earlier points relate to the strangeness of the choice-set dependence of relative utility. We agree that well-being should be choice-set independent. But by letting the critical level be choice-set dependent, you make relative utility choice-set dependent. I guess you’re OK with that, but I find it undesirable.
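The worry above can be sketched in a few lines. The rule below is hypothetical (the specific critical levels are mine, chosen only for illustration), but it shows the structural point: if the critical level an agent picks depends on which populations are in the choice set, then the same well-being yields different relative utilities in different choice sets.

```python
# Sketch (my construction): relative utility = well-being - critical level.
def relative_utility(welfare, critical_level):
    return welfare - critical_level

welfare = 50  # the agent's well-being, fixed across choice sets

# Hypothetical choice-set-dependent rule: facing a larger choice set
# (one containing a repugnant option Z), the agent picks a higher
# critical level to block that option.
critical_in_small_set = 10   # chosen when the choice set is {A, B}
critical_in_large_set = 40   # chosen when the choice set is {A, B, Z}

# Same welfare, different relative utility in each choice set:
print(relative_utility(welfare, critical_in_small_set))
print(relative_utility(welfare, critical_in_large_set))
```

Well-being stays fixed at 50 in both cases; only the choice set changed, yet the relative utility moves from 40 to 10.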

• I honestly don’t yet see how setting a high critical level to avoid the sadistic repugnant conclusion would automatically result in the counter-intuitive problems with lexicality of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (For me, your example and calculations are still unclear: what is the choice set? What is the distribution of utilities in each possible situation?)

With rigidity I indeed mean having strong requirements on critical levels. Allowing agents to choose critical levels dependent on the choice set is an example that introduces much more flexibility. But again, I’ll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice-set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently of the choice set. That’s fine, but we should accept the freedom of others not to do so.