[Question] How to assign numerical values to individual welfare?

Hi everyone!

I have the following question about utilitarianism. If you define a utility function $u$, you have to sum over the welfare of all sentient beings. It is more or less possible to decide whether the welfare of one being is greater than the welfare of another. But to determine the total utility, an order relation is not enough: in addition you have to assign a numerical value to each individual's welfare. Depending on how you choose these values, the best decisions can be vastly different. For example, a ranking that says one human's welfare exceeds one insect's welfare does not by itself tell us whether ten happy insects outweigh one happy human.

To illustrate my point I introduce the following toy model: Let there be $n$ classes of sentient beings whose mental capabilities are comparable within each class. The first class could be insects, the second rodents, the third apes, the fourth humans and the fifth AIs with superhuman mental capabilities.

I denote the set of all happy beings in class $i$ by $H_i$ and the set of all suffering beings by $S_i$. The number of all beings in $H_i$ should be $h_i$, and the number in $S_i$ should be $s_i$. By observing the behaviour or the brain architecture of the beings, we can get a rough idea of whether individuals in class $i+1$ have higher cognitive functions / a higher welfare than members of class $i$ (although even this may be debatable). Mathematically speaking, we have an order relation on the index set $\{-n, \dots, -1, 1, \dots, n\}$, where a positive index $i$ stands for the happy members of class $i$ and $-i$ for its suffering members. To determine the total utility $u$ we need a monotone function $f : \{-n, \dots, -1, 1, \dots, n\} \to \mathbb{R}$ such that

$$u = \sum_{i=1}^{n} \bigl( h_i \, f(i) + s_i \, f(-i) \bigr).$$
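To make this concrete, here is a minimal Python sketch of how $u$ would be computed. The population counts and the particular weighting function $f$ below are arbitrary placeholders I made up for illustration; they are not part of the model.

```python
# Minimal sketch of the toy model. The counts h_i, s_i and the example
# weighting function f are made up purely for illustration.
# Classes i = 1..5: insects, rodents, apes, humans, superhuman AIs.

h = [10**9, 10**6, 10**4, 10**3, 10]   # h[i-1]: happy beings in class i
s = [10**8, 10**5, 10**3, 10**2, 1]    # s[i-1]: suffering beings in class i

def f(k):
    """Example monotone weighting on {-n,...,-1, 1,...,n}: happy members of
    class i get weight 2**(i-1), suffering members get -2**(i-1)."""
    return (2 ** (abs(k) - 1)) * (1 if k > 0 else -1)

def total_utility(h, s, f):
    """u = sum_i ( h_i * f(i) + s_i * f(-i) ), classes indexed i = 1..n."""
    return sum(h[i - 1] * f(i) + s[i - 1] * f(-i) for i in range(1, len(h) + 1))

print(total_utility(h, s, f))
```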

If $f$ has large negative values for negative indices, we obtain suffering-focused ethics. If $f$ is nearly constant for positive indices, the ideal world would be populated only by happy insects, since a fixed area can support more of them. If $f$ is nearly zero for non-human animals and 1 for humans and AIs, the ideal world would be populated by as many happy humans (or emulated minds) as possible, and AI development would not be a high priority. If $f$ increases very fast, it would be best to develop superhuman AIs as fast as possible, even at the expense of humans.
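The dependence on $f$ can be made explicit by ranking a few candidate worlds under different weighting functions. Again, the candidate worlds and the weighting functions below are arbitrary placeholders of my own, chosen only so that the ranking flips.

```python
# Rank a few candidate worlds under different weighting functions f.
# All population numbers and weights are arbitrary placeholders; the point is
# only that the choice of f changes which world comes out best.

def total_utility(h, s, f):
    """u = sum_i ( h_i * f(i) + s_i * f(-i) ), classes indexed i = 1..n."""
    return sum(h[i - 1] * f(i) + s[i - 1] * f(-i) for i in range(1, len(h) + 1))

def sign(k):
    return 1 if k > 0 else -1

# Each world: happy counts h_1..h_5 and suffering counts s_1..s_5 for
# insects, rodents, apes, humans, superhuman AIs.
worlds = {
    "happy insects only": ([10**12, 0, 0, 0, 0], [0, 0, 0, 0, 0]),
    "many happy humans":  ([0, 0, 0, 10**10, 0], [0, 0, 0, 0, 0]),
    "superhuman AIs":     ([0, 0, 0, 0, 10**6],  [0, 0, 0, 0, 0]),
}

weightings = {
    "nearly constant":   lambda k: 1.0 * sign(k),
    "zero below humans": lambda k: (1.0 if abs(k) >= 4 else 0.0) * sign(k),
    "fast increasing":   lambda k: (10.0 ** (10 * abs(k))) * sign(k),
}

for name, f in weightings.items():
    best = max(worlds, key=lambda w: total_utility(*worlds[w], f))
    print(f"{name}: best world = {best}")
```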

My question is whether there is a non-arbitrary way to determine $f$. It may be possible to define $f$ in such a way that it captures most of our ethical intuitions. I am not sure if this approach is too subjective, but I do not see how to define $f$ in terms that can be measured.
