Thanks! Let me write them as a loss function in python (ha)
For real though:
Some flavor of hedonic utilitarianism
I guess I should say I have moral uncertainty (which I endorse as a thing) but eh I’m pretty convinced
Longtermism as explicitly defined is true
Don’t necessarily endorse the cluster of beliefs that tend to come along for the ride though
“Suffering focused total utilitarian” is the annoying phrase I made up for myself
I think many (most?) self-described total utilitarians give too little consideration/weight to suffering, and I don’t think it really matters (if there’s a fact of the matter) whether this is because of empirical or moral beliefs
Maybe my most substantive deviation from the default TU package is the following (defended here):
“Under a form of utilitarianism that places happiness and suffering on the same moral axis and allows that the former can be traded off against the latter, one might nevertheless conclude that some instantiations of suffering cannot be offset or justified by even an arbitrarily large amount of wellbeing.”
Moral realism for basically all the reasons described by Rawlette on 80k but I don’t think this really matters after conditioning on normative ethical beliefs
Nothing besides valenced qualia/hedonic tone has intrinsic value
I think that might literally be it—everything else is contingent!
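Taking the "loss function in python" joke half-seriously: here's a purely illustrative sketch of the views above. Every name and number in it (the suffering weight, the offset threshold) is made up for illustration, not a claim about the actual exchange rates.

```python
import math

# Illustrative only: constants are arbitrary placeholders.
SUFFERING_WEIGHT = 3.0     # suffering-focused: suffering counts extra
OFFSET_THRESHOLD = -100.0  # suffering this intense can never be offset

def moral_loss(experiences):
    """Lower is better. `experiences` is a list of hedonic values:
    positive = happiness, negative = suffering. Nothing else enters,
    since only valenced qualia have intrinsic value on this view."""
    # Lexical clause: some instantiations of suffering cannot be
    # offset or justified by any amount of wellbeing.
    if any(e <= OFFSET_THRESHOLD for e in experiences):
        return math.inf
    total = 0.0
    for e in experiences:
        # Weight suffering more heavily than happiness.
        total += -e * SUFFERING_WEIGHT if e < 0 else -e
    return total
```

So `moral_loss([10, -5])` trades the two off (but penalizes the suffering extra), while any experience at or below the threshold makes the whole world infinitely bad, no matter how much happiness surrounds it.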
I was inspired to create this market! I would appreciate it if you weighed in. :)