I hope to have time to read your comment and reply in more detail later, but for now just one quick point because I realize my previous comment was unclear:
I am actually sympathetic to an “‘egoistic’, agent-relative, or otherwise nonconsequentialist perspective”. I think overall my actions are basically controlled by some kind of bargain/compromise between such a perspective (or perhaps perspectives) and impartial consequentialism.
The point is just that, from within these other perspectives, I happen to not be that interested in “impartially maximize value over the next few hundreds of years”. I endorse helping my friends, maybe I endorse volunteering in a soup kitchen or something like that; I also endorse being vegetarian or donating to AMF, or otherwise reducing global poverty and inequality (and yes, within these ‘causes’ I tend to prefer larger over smaller effects); I also endorse reducing far-future s-risks and current wild animal suffering, but not quite as much. But this is all guided more by responding to reactive attitudes like resentment and indignation than by any moral theory. It looks a lot like moral particularism, and so it’s somewhat hard to move me with arguments in that domain (it’s not impossible, but it would require something that’s more similar to psychotherapy or raising a child or “things the humanities do” than to doing analytic philosophy).
So this roughly means that if you wanted to convince me to do X, then you either need to be “lucky” that X is among the things I happen to like for idiosyncratic reasons—or X needs to look like a priority from an impartially consequentialist outlook.
It sounds like we both agree that when it comes to reflecting about what’s important to us, there should maybe be a place for stuff like “(idiosyncratic) reactive attitudes,” “psychotherapy or raising a child or ‘things the humanities do’” etc.
Your view seems to be that you have two modes of moral reasoning: The impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist).
My point with my long comment earlier is basically the following: The separation between these two modes is not clear!
I’d argue that what you think of as the “impartial mode” has some clear-cut applications, but it’s under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, using appeals that you’d normally place in the subjectivist/particularist/existentialist mode.
Specifically, population ethics is under-defined. (It’s also under-defined how to extract “idealized human preferences” from people like my parents, who aren’t particularly interested in moral philosophy or rationality.)
I’m trying to point out that if you fully internalized that population ethics is going to be under-defined no matter what, you then have more than one option for how to think about it. You no longer have to think of impartiality criteria and “never violating any transitivity axioms” as the only option. You can think of population ethics more like this: Existing humans have a giant garden (the ‘cosmic commons’) that is at risk of being burnt, and they can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn’t be done with that garden. You can look for the “impartially best way to make use of the garden” – or you could look at how other people want to use the garden and compromise with them, or look for “meta-principles” that guide who gets to use which parts of the garden (and stuff that people definitely shouldn’t do, e.g., no one should shit in their part of the garden), without already having a fixed vision for what the garden has to look like in the end, once it’s all made use of. Basically, I’m saying that “knowing from the very beginning exactly what the ‘best garden’ has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there’s no universally correct solution anyway!). You’re very much allowed to think of gardening in a different, more procedural and ‘particularist’ way.”
Thanks! I think I basically agree with everything you say in this comment. I’ll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly ‘metaethical’ level (it does seem clear we land on different object-level views/preferences).
In particular, while I happen to like a particular way of cashing out the “impartial consequentialist” outlook, I (at least on my best-guess view on metaethics) don’t claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.