Could you elaborate on this? I might have worded things poorly. To rephrase and add a bit more, I meant something like
> We understand welfare and its moral value in relation to our own experiences, including our own hedonic states, desires, preferences and intuitions. We use personal reference point experiences, and understand other experiences and their value — in ourselves and others — relative to those reference point experiences.
>
> (These personal reference point experiences can also be empathetic responses to others, which might complicate things.)

The section the summary bullet point you quoted links to is devoted to arguing for that claim.
Anticipating and responding to some potential sources of misunderstanding:
I didn’t intend to claim we’re all experientialists and so only care about the contents of experiences, rather than, say, how our desires relate to the actual states of the world. The arguments don’t depend on experientialism.
I mostly illustrated the arguments with suffering, which may give/reinforce the impression that I’m saying our understanding of value is based on hedonic states only, but I didn’t intend that.
I can try, but honestly I don’t know where to start; I’m well aware that I’m out of my depth philosophically, and this section just doesn’t chime with my own experience at all. I sense a lot of inferential distance here.
Trying anyway: that section felt closer to an empirical claim that ‘we’ already do things a certain way than an argument for why we should do things that way, and I don’t seem to be part of the ‘we’. I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to ‘why I don’t buy this’ than ‘why I think you’re wrong’.
***
> The strengths of our reasons to reduce human suffering or satisfy human belief-like preferences, say, don’t typically seem to depend on our understanding of their empirical or descriptive nature. This is not how we actually do ethics.
I am most sympathetic to this if I read it as a cynical take on human morality, i.e. I suspect this is more true than I sometimes care to admit. I don’t think you’re aiming for that? Regardless, it’s not how I try to do ethics. I at least try to let my mind change when relevant facts change.
An example issue is that memory is fallible; you say that I have directly experienced human suffering, but for anything I am not experiencing right this second, all I can access is the memory of it. I have seen firsthand that memory often edits experiences after the fact to make them substantially more or less severe than they seemed at the time. So if strong evidence showed me that something I remember as very painful was actually painless, the ‘strength of my reason’ to reduce that suffering would fall[1].
You use some other examples to illustrate how the empirical nature does not matter, such as discovering serotonin is not what we think it is. I agree with that specific case. I think the difference is that your example of an empirical discovery doesn’t really say anything about the experience, while mine above does?
> Instead, we directly value our experiences, not our knowledge of what exactly generates them...how suffering feels to us and how bad it feels to us...do not change with our understanding of its nature
Knowing what is going on during an experience seems like a major contributor to how I relate to that experience, e.g. I care about how long it’s going to last. Looking outward for whether others feel similarly, It Gets Better and the phrase ‘light at the end of the tunnel’ come to mind.
You could try to fold this in and say that the pain of the dental drill is itself less bad because I know it’ll only last a few seconds, or conversely that (incorrectly) believing a short-lived pain will last a long time makes the pain itself greater, but that type of modification seems very artificial to me and is not how I typically understand the words ‘pain’ and ‘suffering’.
...But to use this as another example of how I might respond to new evidence: if you showed me that the brain does in fact respond less strongly to a painful stimulus when the person has been told it’ll be short, that could make me much more comfortable describing it as less painful in the ordinary sense.
There are other knowledge-based factors that feel like they directly alter my ‘scoring’ of pain’s importance as well, e.g. a sense of whether it’s for worthwhile reasons.
> And it could end up being the case — i.e. with nonzero probability — that chickens don’t matter at all, not even infinitesimally...And because of the possible division by 0 moral weight, the expected moral weights of humans and all other animals will be infinite or undefined. It seems such a view wouldn’t be useful for guiding action.
I’m with your footnote here; it seems entirely conceivable to me that my own suffering does not matter, so trying to build ratios with it as the base has the same infinity issue, as you say:
> However, in principle, humans in general or each proposed type of wellbeing could not matter with nonzero probability, so we could get a similar problem normalizing by human welfare or moral weights.
Per my OP, I roughly think you have to work with differences, not ratios.
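To make that concrete, here’s a minimal numerical sketch (all probabilities and weights are made-up placeholders, not anyone’s actual estimates) of why an expected ratio blows up under any nonzero probability of zero moral weight, while an expected difference stays finite:

```python
# Toy sketch with hypothetical numbers: two hypotheses about chicken moral
# weight, with human moral weight normalized to 1.
p_zero = 0.01             # probability chickens don't matter at all (weight exactly 0)
w_human = 1.0             # human moral weight (normalization)
w_chicken_nonzero = 0.05  # chicken moral weight under the other hypothesis

# The expected *difference* in moral weight is finite and well-defined:
expected_diff = (p_zero * (w_human - 0.0)
                 + (1 - p_zero) * (w_human - w_chicken_nonzero))
print(expected_diff)  # ~0.9505

# The expected *ratio* is not: the zero-weight branch divides by zero, so
# E[w_human / w_chicken] comes out infinite (or undefined).
try:
    expected_ratio = (p_zero * (w_human / 0.0)
                      + (1 - p_zero) * (w_human / w_chicken_nonzero))
except ZeroDivisionError:
    expected_ratio = float("inf")
print(expected_ratio)  # inf
```

The same blow-up hits whatever you put in the denominator, so long as there’s nonzero probability its weight is exactly zero, which is why normalizing by human welfare doesn’t escape the problem either.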
***
Overall, I was left with a sense from the quote below and the overall piece that you perceive your direct experience as a way to ground your values, a clear beacon telling you what matters, and then we just need to pick up a torch and shine a light into other areas and see how much more of what matters is out there. For me, everything is much more ‘fog of war’, very much including my own experiences, values and value. So (and this may be unfair) I feel like you’re asking me ‘why isn’t this clear to you?’ and I’m like ‘I don’t know what to tell you, it just doesn’t look that simple from where I’m sitting’.

[1] Though perhaps not quite to zero; it seems I would need to think about how much of the total suffering is the memory of suffering.

Thanks, this is helpful!
> Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty.
I think what I had in mind was more like the neuroscience and theories of pain in general terms, or in typical cases (hence “typically”), not very specific cases. So, I’d allow exceptions.
> Your understanding of the general neuroscience of pain will usually not affect how bad your pain feels to you (especially when you’re feeling it). Similarly, your understanding of the general neuroscience of desire won’t usually affect how strong (most of) your desires are. (Some people might comfort themselves with this knowledge sometimes, though.)
This is what I need when we think about looking for experiences like ours in other animals.
On your specific cases below.
The fallible pain memory case could be an exception. I suspect there’s also an interpretation compatible with my view without making it an exception: your reasons to prevent a pain that would be like you remember the actual pain you had (or didn’t have) are just as strong, but the actual pain you had was not like you remember it, so your reasons to prevent it (or a similar actual pain) are not in fact as strong.
In other words, you are valuing your impression of your past pain, or, say, valuing your past pain through your impression of it.[1] That impression can fail to properly track your past pain experience.[2] But, holding your impression fixed, if your past pain or another pain were like your impression, then there wouldn’t be a problem.
And knowing how long a pain will last probably often does affect how bad/intense the overall experience (including possible stress/fear/anxiety) seems to you in the moment. And either way, how you value the pain, even non-hedonically, can depend on the rest of your impression of things, and as you suggest, contextual factors like “whether it’s for worthwhile reasons”. This is all part of the experience.
The valuing itself is also part of the impression as a whole, but your valuing is applied to or a response to parts of the impression.
Really, ~all memories of experiences will be at least somewhat off, and they’re probably systematically off in specific ways. How you value pain while in pain and how you value it as you remember it will not match.