If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.
Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was that I had to have it redone and endured more pain overall. My forebrain understood this; my hindbrain is dumb.
(FWIW the dentist was very understanding, and apologetic that the anesthetic didn't do its job. I did not get the impression that my failure was unusual given that.)
When I talk about suffering disrupting enjoyment of non-hedonic goods, I mean something like that flinch: a forced "eliminate the pain!" response that likely made good sense back in the ancestral environment, but not a choice or preference in the usual sense of that term. This is particularly easy to see in cases like my flinch, where the hindbrain's "preference" is self-defeating, but I would make similar observations in some other cases, e.g. addiction.
If you don't weigh desires by attention or their effects on attention, I don't see how you can ground interpersonal utility comparisons at all.
I don't quite see what you're driving at with this line of argument.
I can see how being able to firmly "ground" things is a nice/helpful property for a theory of "what is good?" to have. I like being able to quantify things too. But to imply that measuring good must be this way seems like a case of succumbing to the Streetlight Effect, or perhaps even the McNamara fallacy if you then downgrade other conceptions of good in the style of the quote below.
Put another way, it seems like you prefer to weigh by attention because it makes answers easier to find, but what if such answers are just difficult to find?
The fact that "what is good?" has been debated for literally millennia with no resolution in sight suggests to me that the answer just is difficult to find, in the same way that after some amount of time you should acknowledge your keys just aren't under the streetlight.
But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can't easily be measured or given a quantitative value. The third step is to presume that what can't be measured easily really isn't important. The fourth step is to say that what can't be easily measured really doesn't exist.
To avoid the above pitfall, which I think all STEM types should keep in mind: when I suspect my numbers are failing to capture the (morally) important things, my default response is to revert in the direction of common-sense morality. I think STEM people who fail to check themselves this way often end up causing serious harm[1]. In this case that would make me less inclined to trade human lives for animal welfare, not more.
I'll probably leave this post at this point unless I see a pressing need for further clarification of my views. I do appreciate you taking the time to engage politely.

[1] SBF is the obvious example here, but really I've seen this so often in EA. Big fan of Warren Buffett's quote here:
It's worth distinguishing different attentional mechanisms, like motivational salience from stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there's top-down/voluntary/endogenous attention, the executive function you use to intentionally focus on things.
We could pick any of these when weighing desires by their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn't.
I don't mean to discount preferences if interpersonal comparisons can't be grounded. I mean that if animals have such preferences, you can't say they're less important (there's no fact of the matter either way), as I said in my top-level comment.