When a range of different happiness levels is available for future people, it is hard to have time-consistent preferences that don't amount to either "making happy people" or "stopping miserable people from being made".
Suppose you are presented with these 3 options:
- Alice is born and has happiness level A.
- Alice is born and has happiness level B.
- Alice is not born.
In order to be indifferent between all 3 options and have time-consistent preferences, you must also be indifferent between:
- Alice has happiness level A.
- Alice has happiness level B.
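Spelling the step out (writing ∼ for indifference, and relying on indifference being transitive under a time-consistent preference ordering):

$$\text{born with } A \sim \text{not born} \quad\text{and}\quad \text{born with } B \sim \text{not born} \;\Rightarrow\; \text{born with } A \sim \text{born with } B$$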
It's possible to have some happiness level L, and be indifferent to people with levels above L existing but opposed to people with levels below L being created. This does amount to a form of negative utilitarianism that would have killed off the first primitive life, given the chance.
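A minimal sketch of that threshold rule, in Python (the threshold value and the example happiness numbers are made up; only the shape of the scoring matters):

```python
# Toy scoring of "add one person with this happiness level" under the
# threshold view above: indifferent above L, against anything below L.
L = 0.0  # the threshold; picked arbitrarily for illustration

def value_of_adding_person(happiness: float, threshold: float = L) -> float:
    """How much better or worse the world gets, on this view, from one extra person."""
    # At or above the threshold: no credit, no penalty (indifference).
    # Below the threshold: counted as bad, in proportion to the shortfall.
    return min(happiness - threshold, 0.0)

print(value_of_adding_person(5.0))   # 0.0  -> indifferent to a happy life existing
print(value_of_adding_person(-3.0))  # -3.0 -> against a miserable life existing
```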
It's also allowed to say: my preferences aren't time-consistent, and I can be money-pumped. Every time a baby is born, my preferences shift from not caring about that baby to caring about that baby.
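To make the money-pump worry concrete, here is a toy sketch (the broker, the voucher, and every dollar figure are invented purely for illustration):

```python
# Toy money pump against preferences that shift when a baby is born.

def voucher_value(baby_exists: bool) -> float:
    """How much the agent values a voucher that boosts the baby's welfare."""
    return 100.0 if baby_exists else 0.0  # caring switches on at birth

wallet = 0.0

# Before the birth: the voucher is worth $0 to the agent,
# so selling it to a broker for $1 looks like free money.
assert voucher_value(baby_exists=False) < 1.0
wallet += 1.0

# After the birth: the agent now values the voucher at $100,
# so buying it back for $50 looks like a bargain.
assert voucher_value(baby_exists=True) > 50.0
wallet -= 50.0

# Same voucher as before, but the agent is $49 poorer.
print(f"Net change to wallet: ${wallet:+.2f}")  # -> $-49.00
```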
If your preferences shift like that, you should want to sign a binding oath saying you won't go out of your way to help anyone who wasn't born at the time of signing. (At the time of signing, you by hypothesis don't care about them, and you want to stop your future self wasting resources on someone you currently don't care about.)
Or maybe you decide you are a human, not an AI. Any clear utility function you write down will be Goodhartable. You will just wing it based on intuition and hope.
As someone with some sort of person-affecting view, I think there’s a relevant distinction to be made between (1) not caring about potential/future people, and (2) being neutral about how many potential/future people exist. Personally, I do care about future people, so I wouldn’t sign the binding oath. In 50 years, if we don’t go extinct, there will be lots of people existing who don’t exist now—I want those people to have good lives, even though they are only potential people now. For me, taking action so that future people are happy falls under ‘making people happy’.
Thank you both. I think my intuition is like Amber's here. Obviously I care about any human who will be born as soon as they are born, but I cannot seem to make myself care about how many humans there will be (unless that number has an impact on the ones who are around).