I agree that psychological harms (intrinsically) matter and that the fact that some such harms are contingent on the harmed person's having certain beliefs, attitudes or dispositions (i.e. their psychology) raises complicated questions.
That said, I don’t think that a simple framework based around whether it is easier to minimise harm by changing the offending ‘actions’ (fwiw, it seems like this could include broader states of affairs) or the harmed person’s psychology will suffice.
We probably also need to be concerned with whether the harmed person’s beliefs are true or false and whether their attitudes are fitting (not merely whether they are fortunate; see Chappell, 2009).
For example, if Sam comments on Alex’s post on the Forum and Alex experiences harm due to taking this in a certain way, it’s probably important to know whether Alex’s response is itself appropriate. (Obviously there are various complexities about how this might go: Alex might reasonably/unreasonably have true/false beliefs and have fitting/unfitting attitudes which result in appropriate/inappropriate responses, in any number of different combinations.)
We might have non-consequentialist reasons to care about each of these things (i.e. not wanting people to have to form false beliefs or inappropriate attitudes, even if it would lead to fortunate outcomes if they did). A famous example concerns adaptive preferences: it seems intuitively troubling if someone, or some group, facing poor prospects forms low expectations in light of this fact and is thereby satisfied receiving little (and less than they could receive in better circumstances).
But we might also have consequentialist grounds for not taking a naive approach based on asking whether it would be easier for Alex or Sam to change in order to reduce the harm caused to Alex. Whichever might seem easier in a particular case or set of cases, it seems reasonable to think there might be significant downstream costs to people having false beliefs or unreasonable responses. This is especially so given that, as you note, the incentives we establish here can encourage different ‘affective ideologies’ or different individual psychologies to propagate (especially since people have some capacity to ‘tie themselves to the mast’ and make it such that they could not cheaply change their attitudes, even if they otherwise would have been able to).
Agree that this is an important consideration! See my response above for a reply to a similar comment :-)