Ok, so the crux of my question was not understanding that non-preference utilitarianism exists, although now I’m even more confused, as I explained in my reply to HjalmarWijk. You also seem to be coming from the assumption that suffering (and, I assume, pleasure) exists separately from an agent achieving its goals, so I’m curious to hear your thoughts on how you define them.
>So for me there isn’t really a paradox to resolve when it comes to propositions like ‘the best future is one where an enormous number of highly efficient AGIs are experiencing as much joy as cybernetically possible, meat is inefficient at generating utility’.
Does this mean that you would agree with such a proposition?