I can try, but honestly I don't know where to start; I'm well aware that I'm out of my depth philosophically, and this section just doesn't chime with my own experience at all. I sense a lot of inferential distance here.
Trying anyway: that section felt closer to an empirical claim that "we" already do things a certain way than an argument for why we should do things that way, and I don't seem to be part of the "we". I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to "why I don't buy this" than "why I think you're wrong".
***
> The strengths of our reasons to reduce human suffering or satisfy human belief-like preferences, say, don't typically seem to depend on our understanding of their empirical or descriptive nature. This is not how we actually do ethics.
I am most sympathetic to this if I read it as a cynical take on human morality, i.e. I suspect this is more true than I sometimes care to admit. I don't think you're aiming for that? Regardless, it's not how I try to do ethics. I at least try to change my mind when relevant facts change.
An example issue is that memory is fallible; you say that I have directly experienced human suffering, but for anything I am not experiencing right this second, all I can access is the memory of it. I have seen firsthand that memory often edits experiences after the fact to make them substantially more or less severe than they seemed at the time. So if strong evidence showed me that something I remember as very painful was actually painless, the "strength of my reason" to reduce that suffering would fall.[1]
You use some other examples to illustrate how the empirical nature does not matter, such as discovering serotonin is not what we think it is. I agree with that specific case. I think the difference is that your example of an empirical discovery doesn't really say anything about the experience, while mine above does?
> Instead, we directly value our experiences, not our knowledge of what exactly generates them... how suffering feels to us and how bad it feels to us... do not change with our understanding of its nature
Knowing what is going on during an experience seems like a major contributor to how I relate to that experience, e.g. I care about how long it's going to last. Looking outward for whether others feel similarly, It Gets Better and the phrase "light at the end of the tunnel" come to mind.
You could try to fold this in and say that the pain of the dental drill is itself less bad because I know it'll only last a few seconds, or conversely that (incorrectly) believing a short-lived pain will last a long time makes the pain itself greater, but that type of modification seems very artificial to me and is not how I typically understand the words "pain" and "suffering".
...But to use this as another example of how I might respond to new evidence: if you showed me that the brain does in fact respond less strongly to a painful stimulus when the person has been told it'll be short, that could make me much more comfortable describing it as less painful in the ordinary sense.
There are other knowledge-based factors that feel like they directly alter my "scoring" of pain's importance as well, e.g. a sense of whether it's for worthwhile reasons.
> And it could end up being the case, i.e. with nonzero probability, that chickens don't matter at all, not even infinitesimally... And because of the possible division by 0 moral weight, the expected moral weights of humans and all other animals will be infinite or undefined. It seems such a view wouldn't be useful for guiding action.

I'm with your footnote here; it seems entirely conceivable to me that my own suffering does not matter, so trying to build ratios with it as the base has the same infinity issue, as you say:

> However, in principle, humans in general or each proposed type of wellbeing could not matter with nonzero probability, so we could get a similar problem normalizing by human welfare or moral weights.
Per my OP, I roughly think you have to work with differences, not ratios.
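The differences-versus-ratios point can be sketched numerically. Below is a toy illustration (the weights and probabilities are made up for this example, not taken from either post): when the denominator can be exactly 0 with nonzero probability, the expected ratio is infinite or undefined, while the expected difference stays well-defined.

```python
# Toy illustration: expected ratios blow up when the denominator can be 0
# with nonzero probability, while expected differences remain finite.
# All numbers below are invented purely for illustration.

# Possible moral weights for a chicken, with probabilities summing to 1.
# One scenario assigns weight 0 ("chickens don't matter at all").
chicken_scenarios = [(0.0, 0.1), (0.1, 0.5), (0.5, 0.4)]  # (weight, probability)
human_weight = 1.0

# Expected ratio human/chicken: the weight-0 scenario divides by zero,
# so the expectation is infinite/undefined and useless for guiding action.
try:
    expected_ratio = sum(p * (human_weight / w) for w, p in chicken_scenarios)
except ZeroDivisionError:
    expected_ratio = float("inf")

# Expected difference human - chicken: perfectly well-defined.
expected_diff = sum(p * (human_weight - w) for w, p in chicken_scenarios)

print(expected_ratio)  # inf
print(expected_diff)   # ~0.75
```

The same structure applies if human weight is the denominator instead, which is the symmetry the quoted footnote points at.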
***
Overall, I was left with a sense from the quote below and the overall piece that you perceive your direct experience as a way to ground your values, a clear beacon telling you what matters, and then we just need to pick up a torch and shine a light into other areas and see how much more of what matters is out there. For me, everything is much more "fog of war", very much including my own experiences, values and value. So (and this may be unfair) I feel like you're asking me "why isn't this clear to you?" and I'm like "I don't know what to tell you, it just doesn't look that simple from where I'm sitting".
> Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty.

[1] Though perhaps not quite to zero; it seems I would need to think about how much of the total suffering is the memory of suffering.
Thanks, this is helpful!

I think what I had in mind was more like the neuroscience and theories of pain in general terms, or in typical cases (hence "typically"), not very specific cases. So, I'd allow exceptions.
Your understanding of the general neuroscience of pain will usually not affect how bad your pain feels to you (especially when you're feeling it). Similarly, your understanding of the general neuroscience of desire won't usually affect how strong (most of) your desires are. (Some people might comfort themselves with this knowledge sometimes, though.)
This is what I need, when we think about looking for experiences like ours in other animals.
On your specific cases below.
The fallible pain memory case could be an exception. I suspect there's also an interpretation compatible with my view without making it an exception: your reasons to prevent a pain that would be like you remember the actual pain you had (or didn't have) are just as strong, but the actual pain you had was not like you remember it, so your reasons to prevent it (or a similar actual pain) are not in fact as strong.
In other words, you are valuing your impression of your past pain, or, say, valuing your past pain through your impression of it.[1] That impression can fail to properly track your past pain experience.[2] But, holding your impression fixed, if your past pain or another pain were like your impression, then there wouldn't be a problem.
And knowing how long a pain will last probably often does affect how bad/intense the overall experience (including possible stress/fear/anxiety) seems to you in the moment. And either way, how you value the pain, even non-hedonically, can depend on the rest of your impression of things, and as you suggest, contextual factors like "whether it's for worthwhile reasons". This is all part of the experience.
[1] The valuing itself is also part of the impression as a whole, but your valuing is applied to or a response to parts of the impression.

[2] Really, ~all memories of experiences will be at least somewhat off, and they're probably systematically off in specific ways. How you value pain while in pain and as you remember it will not match.