The intransitivity problem that you address is very similar to the relativity of simultaneity in special relativity. https://en.wikipedia.org/wiki/Relativity_of_simultaneity

Consider three space-time points (events) P1, P2 and P3. P1 has a future and a past light cone; points in the future light cone are in the future of P1, i.e. at a later time according to all observers. Suppose P2 and P3 lie outside both the future and past light cones of P1. Then it is possible to choose a reference frame (e.g. a non-accelerating rocket) in which P1 and P2 have the same time coordinate and hence are simultaneous: the person in the rocket sees the two events happen at the same time on his personal clock. It is also possible to perform a Lorentz transformation to another reference frame, e.g. a second rocket moving at constant velocity relative to the first, in which P1 and P3 are simultaneous: the person in the second rocket sees P1 and P3 at the same time on her personal clock. But it is possible that P3 lies in the future light cone of P2, which means that all observers agree that P3 happens after P2 (at a later time according to all clocks).

So special relativity involves a special kind of intransitivity: P2 is simultaneous with P1, P1 is simultaneous with P3, and yet P3 happens later than P2. This does not make space-time inconsistent or irrational, nor does it make the notion of time incomprehensible. The same goes for person-affecting views. In the analogy, the time coordinate corresponds to a person's utility level, and a later time means a higher utility. You can formulate a person-affecting axiology that is 'Lorentz invariant' just like special relativity.
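If it helps to see the intransitivity with concrete numbers, here is a minimal sketch in 1+1 dimensions. The coordinates, the helper functions and the boost velocity v = -2/3 are my own illustrative choices, not part of the original argument:

```python
import math

def boost(event, v):
    """Lorentz-boost a (t, x) event by velocity v (units where c = 1)."""
    t, x = event
    g = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz factor gamma
    return (g * (t - v * x), g * (x - v * t))

def interval(a, b):
    """Invariant interval dt^2 - dx^2: > 0 timelike, < 0 spacelike."""
    dt, dx = b[0] - a[0], b[1] - a[1]
    return dt * dt - dx * dx

# Illustrative coordinates (t, x), chosen for this sketch:
P1 = (0.0, 0.0)
P2 = (0.0, -2.0)  # outside P1's light cones; simultaneous with P1 in this frame
P3 = (2.0, -3.0)  # outside P1's light cones, but inside P2's future cone

assert interval(P1, P2) < 0  # spacelike: order of P1, P2 is frame-dependent
assert interval(P1, P3) < 0  # spacelike: order of P1, P3 is frame-dependent
assert interval(P2, P3) > 0 and P3[0] > P2[0]  # timelike: P3 after P2 for all

# Frame of rocket 1 (v = 0): P1 and P2 share a time coordinate.
print(P1[0] == P2[0])  # True

# Frame of rocket 2 (v = -2/3): P1 and P3 share a time coordinate.
v = -2.0 / 3.0
t1, t2, t3 = boost(P1, v)[0], boost(P2, v)[0], boost(P3, v)[0]
print(math.isclose(t1, t3))  # True: P1 and P3 simultaneous in this frame
print(t3 > t2)               # True: P3 is still later than P2
```

The timelike interval between P2 and P3 is frame-invariant, which is why every observer orders them the same way, while the two spacelike pairs can each be made simultaneous by a suitable boost.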
My favorite population-ethical theory is variable critical level utilitarianism (VCLU):
https://stijnbruers.wordpress.com/2020/04/26/a-game-theoretic-solution-to-population-ethics/
https://stijnbruers.files.wordpress.com/2018/02/variable-critical-level-utilitarianism-2.pdf
In many EA-relevant cases (e.g. dealing with X-risks), this theory coincides with total utilitarianism, except that it avoids the very repugnant conclusion. Situation A involves N extremely happy people; situation B involves the same N people, now extremely miserable (very negative utility), plus a huge number M of extra people with lives barely worth living (small positive utility). According to total utilitarianism, situation B is better than situation A if M is large enough. I'm willing to bite the bullet of the repugnant conclusion, but this very repugnant conclusion is for me one of the most counterintuitive conclusions in population ethics. VCLU easily avoids it, as the sketch below illustrates.
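For concreteness, here is a minimal numeric sketch of how a critical-level adjustment can block the very repugnant conclusion. The numbers and the particular critical level c_extra = 2 are illustrative assumptions of mine; the actual mechanism of VCLU is spelled out in the linked papers:

```python
# Illustrative numbers, not from the cited papers:
N, M = 1_000, 1_000_000
u_happy, u_miserable, u_barely = 100.0, -100.0, 1.0

# Total utilitarianism: rank situations by the sum of utilities.
total_A = N * u_happy                     #  100_000
total_B = N * u_miserable + M * u_barely  #  900_000
print(total_B > total_A)  # True: the very repugnant conclusion

# Critical-level adjustment: each person contributes u - c, where the
# critical level c may vary per person (the "variable" in VCLU).
# If the M extra people have a critical level above their utility
# (here c_extra = 2 > u_barely), their lives count negatively:
c_extra = 2.0
vclu_A = N * (u_happy - 0.0)
vclu_B = N * (u_miserable - 0.0) + M * (u_barely - c_extra)
print(vclu_B > vclu_A)  # False: the conclusion is avoided
```

On this reading, it is the variability of the critical levels that does the work: the extra people's lives can fall below their own critical level even though their utility is positive, so piling on more of them never compensates for making the original N people miserable.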
Thanks, I’ll check out your writings on VCLU!