If you weigh desires/​preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.
Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was I had to have it redone and endured more pain overall. My forebrain understood this; my hindbrain is dumb.
(FWIW the dentist was very understanding, and apologetic that the anesthetic didn’t do its job. I did not get the impression that my failure was unusual given that.)
When I talk about suffering disrupting enjoyment of non-hedonic goods I mean something like that flinch: a forced ‘eliminate the pain!’ response that likely made good sense back in the ancestral environment, but not a choice or preference in the usual sense of that term. This is particularly easy to see in cases like my flinch where the hindbrain’s ‘preference’ is self-defeating, but I would make similar observations in some other cases, e.g. addiction.
If you don’t weigh desires by attention or their effects on attention, I don’t see how you can ground interpersonal utility comparisons at all.
I don’t quite see what you’re driving at with this line of argument.
I can see how being able to firmly ‘ground’ things is a nice/helpful property for a theory of ‘what is good?’ to have. I like being able to quantify things too. But to imply that measuring good must be this way seems like a case of succumbing to the Streetlight Effect, or perhaps even the McNamara fallacy if you then downgrade other conceptions of good in the style of the quote below.
Put another way, it seems like you prefer to weight by attention because it makes answers easier to find, but what if such answers are just difficult to find?
The fact that ‘what is good?’ has been debated for literally millennia with no resolution in sight suggests to me that it just is difficult to find, in the same way that after some amount of time you should acknowledge your keys just aren’t under the streetlight.
But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can’t easily be measured or given a quantitative value. The third step is to presume that what can’t be measured easily really isn’t important. The fourth step is to say that what can’t be easily measured really doesn’t exist.
To avoid the above pitfall, which I think all STEM types should keep in mind, when I suspect my numbers are failing to capture the (morally) important things my default response is to revert in the direction of common-sense morality. I think STEM people who fail to check themselves this way often end up causing serious harm[1]. In this case that would make me less inclined to trade human lives for animal welfare, not more.
I’ll probably leave this post at this point unless I see a pressing need for further clarification of my views. I do appreciate you taking the time to engage politely.

[1] SBF is the obvious example here, but really I’ve seen this so often in EA. Big fan of Warren Buffett’s quote here.
It’s worth distinguishing different attentional mechanisms, like motivational salience from stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there’s top-down/​voluntary/​endogenous attention, the executive function you use to intentionally focus on things.
We could pick any of these and measure their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn’t.
I don’t mean to discount preferences if interpersonal comparisons can’t be grounded. I mean that if animals have such preferences, you can’t say they’re less important (there’s no fact of the matter either way), as I said in my top-level comment.