say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike
I’m curious how, excluding phenomenal definitions, you (or he) would define “frustration of a desire” or “negative attitude of dislike”, because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a computer game from reaching its goal. We could program an algorithm to try to satisfy a desire (“navigate through a maze to get to the goal square”) and then prevent it from doing so, or even add additional cruelty by giving it an expectation that it is about to reach its goal and then preventing it.
I share your moral antirealism, but don’t think I could be convinced to care about preventing frustration of that sort of simple desire. It’s the qualia-laden desire that seems to matter to me, though that might be irrational if qualia turn out to be an illusion. I think that, even within antirealism, it still makes sense to avoid stances that involve arbitrary inconsistencies. So if not qualia, I wonder what meaningful difference there is between a StarCraft AI’s frustrated desires and a human’s.
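For concreteness, here is a toy sketch (my own construction, not from anything cited here) of the kind of trivially simple “desire” at issue: a breadth-first-search agent whose only “goal” is a maze square, which we then wall off so its plan can never succeed:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path through a grid maze ('#' = wall), or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

grid = [list("...."), list(".##."), list("....")]
start, goal = (0, 0), (2, 3)

# The "desire": a plan for reaching the goal square.
plan = bfs_path(grid, start, goal)

# The "cruelty": wall off the goal, so replanning fails and the
# "desire" can never be satisfied.
grid[goal[0]][goal[1]] = "#"
blocked = bfs_path(grid, start, goal)   # None: goal now unreachable
```

Nothing in this program looks like a remotely plausible candidate for moral concern, which is the intuition driving the question: if “frustration of a desire” is defined without reference to phenomenal states, what principled line excludes cases like this?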
I think illusionists haven’t worked out the precise details, and that’s more the domain of cognitive neuroscience. I think most illusionists take a gradualist approach,[1] and would say it can be more or less the case that a system experiences states worth describing as “frustration of a desire” or “negative attitude of dislike”. And we can assign more moral weight the more true that seems.[2]
We can ask about:
- how the states affect them in lowish-order ways, e.g. negative valence changes our motivations (motivational anhedonia), biases our interpretations of stimuli and attention, and has various physiological effects that we experience (or at least the specific negative emotional states do; these may differ by emotional state),
- what kinds of beliefs they have about these states (or the objects of the states, e.g. the things they desire), to what extent they’re worth describing as beliefs, and the effects of these beliefs,
- how else they’re aware of these states and in what relation to other concepts (e.g. a self-narrative), to what extent that’s worth describing as (that type of) awareness, and the effects of this awareness.
[1] Tomasik (2014-2017, various other writings here), Muehlhauser (2017, sections 2.3.2 and 6.7), Frankish (2023, 51:00-1:02:25), Dennett (Rothman, 2017; 2018, pp. 168-169; 2019; 2021, 1:16:30-1:18:00), Dung (2022), and Wilterson and Graziano (2021).
[2] This is separate from their intensity or strength.