People generally don’t care about their future QALYs in a linear way: a 1-in-a-million chance of living 10 million times as long, and otherwise dying immediately, is worth ten times as much in expectation under a linear valuation, yet it is very unappealing to most people, and so forth. If you don’t evaluate future QALYs for current people in a way they find acceptable, then you’ll wind up generating recommendations that are contrary to their preferences and which will not be accepted by society at large.
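To make the arithmetic explicit, here is a minimal sketch; the 40-QALY baseline is my own illustrative assumption, not a figure from the argument:

```python
# Compare a sure remaining lifespan against the longevity gamble under a
# purely linear valuation of QALYs. The 40-QALY baseline is an illustrative
# assumption standing in for a typical remaining lifespan.
baseline_qalys = 40

p_win = 1 / 1_000_000        # 1-in-a-million chance the gamble pays off
multiplier = 10_000_000      # live 10 million times as long if it does

# Expected QALYs of the gamble; the "die immediately" branch contributes ~0.
expected_gamble = p_win * multiplier * baseline_qalys

print(f"sure thing:    {baseline_qalys} QALYs")
print(f"gamble (E[V]): {expected_gamble} QALYs")  # 400.0 -- ten times larger
```

A linear evaluator prefers the gamble by a factor of ten, yet most people would refuse it, which is exactly the nonlinearity at issue.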
This sort of argument shows that person-affecting utilitarianism is a very wacky doctrine (also see this), one that doesn’t actually sweep away questions about the importance of the future as some claim, but it doesn’t override ordinary people’s concerns by their own lights.