I agree that those examples are compelling. I’m not sure presentist person-affecting views are a particularly common alternative to longtermism. It’s possible that a number of people consider themselves to have presentist views but haven’t worked out the details. (Or maybe some call their views “presentist” but would have arguments for why their view says to do the thing you want them to do in the examples you give.) And you might say, “this reflects poorly on proponents of person-affecting views; it seems like they tend to be less philosophically sophisticated.” I wouldn’t completely agree with that conclusion. Sure, consistency seems important. But the person-affecting intuition is very strong for some people, and given the way ethics works, you cannot get positions off the ground without some fundamental assumptions (“axioms”). If some people’s person-affecting intuitions are so strong that the thought of turning an already existing small paradise into an instantiation of the repugnant conclusion seems completely unacceptable, that can function as (one of) their moral axiom(s). Their views may not be fully developed, but that will still seem better to them (justifiably so) than adopting totalism, which – to them – would violate what feels like an axiom.
Other flavors of person-affecting views might not have this problem, though they encounter transitivity problems.
I recently published a post on why these “problems” don’t seem like a big deal from a particular vantage point. (Note that the view in question is still compatible with the not-strong formulations of longtermism in MacAskill’s definition, but for subtly different, more indirect reasons.) It’s hard to summarize the point because the post presents a different reasoning framework (“population ethics without an objective axiology”). But here’s an attempt at a summary (and some further relevant context) on why person-affecting views seem quite compelling to me within the particular framework “population ethics without an objective axiology:”
Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”
The axiology-focused approach goes as follows. First, there’s the search for axiology, a theory of (intrinsic) value. (E.g., the axiology may state that good experiences are what’s valuable.) Then, there’s further discussion on whether ethics contains other independent parts or whether everything derives from that axiology. For instance, a consequentialist may frame their disagreement with deontology as follows. “Consequentialism is the view that making the world a better place is all that matters, while deontologists think that other things (e.g., rights, duties) matter more.” Similarly, someone could frame population-ethical disagreements as follows. “Some philosophers think that all that matters is more value in the world and less disvalue (“totalism”). Others hold that further considerations also matter – for instance, it seems odd to compare someone’s existence to never having been born, so we can discuss what it means to benefit a person in such contexts.”
In both examples, the discussion takes for granted that there’s something that’s valuable in itself. The still-open questions come afterward, after “here’s what’s valuable.”
[...]
My alternative account, inspired by Johann Frick [...], says that things are good when they hold what we might call conditional value – when they stand in specific relation to people’s interests/goals. On this view, valuing the potential for happiness and flourishing in our long-run future isn’t a forced move. Instead, it depends on the nature and scope of existing people’s interests/goals and, for highly-morally-motivated people like effective altruists, on one’s favored notion of “doing the most moral/altruistic thing.”
[...]
“There’s no objective axiology” implies (among other things) that there’s no goal that’s correct for everyone who’s self-oriented to adopt. Accordingly, goals can differ between people (see my post, The Life-Goals Framework: How I Reason About Morality as an Anti-Realist). There are, I think, good reasons for conceptualizing ethics as being about goals/interests. (Dismantling Hedonism-inspired Moral Realism explains why I don’t see ethics as being about experiences. Against Irreducible Normativity explains why I don’t see much use in conceptualizing ethics as being about things we can’t express in non-normative terminology.)
[...]
One arguably interesting feature of my framework is that it makes standard objections against person-affecting views no longer seem (as) problematic. A common opinion among effective altruists is that person-affecting views are difficult to make work.[6] In particular, the objection is that they give unacceptable answers to “What’s best for new people/beings?”[7] My framework highlights that maybe person-affecting views aren’t meant to answer that question. Instead, I’d argue that someone with a person-affecting view has answered a relevant earlier question such that “What’s best for new people/beings?” no longer holds priority. Specifically, to the question “What’s the most moral/altruistic thing?,” they answered “Benefiting existing (or sure-to-exist) people/beings.” In that light, under-definedness around creating new people/beings is to be expected – it’s what happens when there’s a tradeoff between two possible values (here: the perspective of existing/sure-to-exist people and that of possible people) and someone decides that one option matters more than the other.
[...]
The transitivity of “better-than relations.”
For any ambitious morality, there’s an intuition that well-being differences in morally relevant others should always matter.[23] However, I think there’s an underappreciated justification/framing for person-affecting views where these views essentially say that possible people/beings are “morally relevant others” only according to minimal morality (so they are deliberately placed outside the scope of ambitious morality).
This part refers to a distinction between minimal morality and ambitious morality, which plays an important role in my reasoning framework:
Minimal morality is “don’t be a jerk” – it’s about respecting that others’ interests/goals may be different from yours. It is low-demanding and therefore compatible with non-moral life goals. It is “contractualist”[11] or “cooperation-focused” in spirit, but in a sense that stays nice even without an expectation of reciprocity.[12]
Ambitious morality is “doing the most moral/altruistic thing.” It is “care-morality,” “consequentialist” in spirit. It’s relevant for morally-motivated individuals (like effective altruists) for whom minimal morality isn’t demanding enough.
[...]
[In my framework], minimal morality isn’t just a low-demanding version of ambitious morality. In many contexts, it has its own authority – something that wouldn’t make sense within the axiology-focused framework. (After all, if an objective axiology governed all aspects of morality, a “low-demanding” morality would still be directed toward that axiology.)[13] In my framework, minimal morality is axiology-independent – it protects everyone’s interests/goals, not just those of proponents of a particular axiology.
So, on the one hand, morality can be about the question “If I want to do ‘the most moral/altruistic thing,’ how can I best benefit others?” – that’s ambitious morality. On the other hand, it can also be about the question “Given that others don’t necessarily share my interests/goals, what follows from that in terms of fairness norms for a civil society?” – that’s minimal morality (“contractualist” in spirit; “don’t be a jerk”).
I agree that person-affecting views don’t give satisfying answers to “what’s best for possible people/beings,” but that seems fine! It’s only within the axiology-focused approach that a theory of population ethics must tell us what’s best for both possible people/beings and for existing (or sure-to-exist) people/beings simultaneously.
There’s no objective axiology that tells us what’s best for possible people/beings and existing people/beings all at once. Therefore, since we’re driven by the desire to better specify what we mean by “doing the most moral/altruistic thing,” it seems like a defensible option to focus primarily on existing (and sure-to-exist) people/beings.