This is a really interesting idea! What are your values, so I can make an informed decision?
Thanks! Let me write them as a loss function in python (ha, rough sketch at the bottom)
For real though:
Some flavor of hedonic utilitarianism
I guess I should say I have moral uncertainty (which I endorse as a thing) but eh I'm pretty convinced
Longtermism as explicitly defined is true
Don't necessarily endorse the cluster of beliefs that tend to come along for the ride though
"Suffering-focused total utilitarian" is the annoying phrase I made up for myself
I think many (most?) self-described total utilitarians give too little consideration/weight to suffering, and I don't think it really matters (if there's a fact of the matter) whether this is because of empirical or moral beliefs
Maybe my most substantive deviation from the default TU package is the following (defended here):
"Under a form of utilitarianism that places happiness and suffering on the same moral axis and allows that the former can be traded off against the latter, one might nevertheless conclude that some instantiations of suffering cannot be offset or justified by even an arbitrarily large amount of wellbeing."
Moral realism for basically all the reasons described by Rawlette on the 80,000 Hours podcast, but I don't think this really matters after conditioning on normative ethical beliefs
Nothing besides valenced qualia/hedonic tone has intrinsic value
I think that might literally be it; everything else is contingent!
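And since I joked about the loss function, here's a toy sketch of roughly what the above might look like as code. Everything here is an illustrative assumption, not a real proposal: the function name `moral_value`, the `suffering_weight`, and the `unoffsetable_threshold` are all made up, and the actual numbers are placeholders I wouldn't defend.

```python
def moral_value(experiences, suffering_weight=2.0, unoffsetable_threshold=-100.0):
    """Toy hedonic total utilitarian value of a list of valenced experiences.

    Positive floats are wellbeing, negative floats are suffering.
    Suffering is up-weighted, and any single experience at or below the
    unoffsetable threshold makes the total -inf: no amount of wellbeing
    can offset or justify it.
    """
    if any(x <= unoffsetable_threshold for x in experiences):
        return float("-inf")
    return sum(x if x >= 0 else suffering_weight * x for x in experiences)


# Ordinary trade-offs between happiness and suffering still apply:
print(moral_value([10.0, 5.0, -3.0]))  # 9.0
# ...but sufficiently bad suffering can't be bought off by huge wellbeing:
print(moral_value([1e9, -150.0]))      # -inf
```

The `-inf` branch is the "cannot be offset by even an arbitrarily large amount of wellbeing" part; everything else is plain totalism with suffering weighted more heavily.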
I was inspired to create this market! I would appreciate it if you weighed in. :)