I’m happy to see more discussion of bargaining approaches to moral uncertainty, thanks for writing this! Apologies, this comment is longer than intended—I hope you don’t mind me echoing your Pascalian slogan!
My biggest worry is with the assumption that resources are distributed among moral theories in proportion to the agent’s credences in the moral theories. It seems to me that this is an outcome that should be derived from a framework for decision-making under moral uncertainty, not something to be assumed at the outset. Clearly, credences should play a role in how we should make decisions under moral uncertainty, but it’s not obvious that this is the right role for them to play. In Greaves and Cotton-Barratt (2019), this isn’t the role that credences play. Rather, credences feed into the computation of the asymmetric Nash Bargaining Solution (NBS), as in their equation (1). Roughly, credences can be thought of as the relative bargaining power of the various moral theories. There’s no guarantee that the resulting bargaining solution allocates resources to each theory in proportion to the agent’s credences. This formal bargaining approach seems much more principled than allocating resources in proportion to credences, so I prefer the former. I doubt your conclusions significantly depend on this, but I think it’s important to be aware that what you described isn’t the same as the bargaining procedure in Greaves and Cotton-Barratt (2019).
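To illustrate (with a toy example of my own, not one from the paper): suppose a $1 budget can be split between theory A’s and theory B’s priorities, the agent’s credences are 0.8 in A and 0.2 in B, A values its share linearly, B has diminishing returns, and the disagreement point is zero for both. The asymmetric NBS then maximises the credence-weighted Nash product, and the resulting split isn’t the 80/20 split you’d get by allocating in proportion to credences:

```python
# Toy sketch of the asymmetric Nash Bargaining Solution over splits of a $1 budget.
# All utilities, credences and the disagreement point are made up for illustration;
# credences enter only as the exponents (bargaining weights) in the Nash product.
import math

credence_A, credence_B = 0.8, 0.2   # the agent's credences in theories A and B

def u_A(share_to_A):                # A values its priority's funding linearly
    return share_to_A

def u_B(share_to_A):                # B has diminishing returns to funding
    return math.sqrt(1 - share_to_A)

d_A, d_B = 0.0, 0.0                 # assumed disagreement utilities

def log_nash_product(s):
    # Asymmetric Nash product (u_A - d_A)^c_A * (u_B - d_B)^c_B, taken in logs.
    return credence_A * math.log(u_A(s) - d_A) + credence_B * math.log(u_B(s) - d_B)

# Grid search over feasible splits (endpoints excluded so the logs are defined).
best_split = max((s / 1000 for s in range(1, 1000)), key=log_nash_product)
print(f"NBS gives A's priority a share of about {best_split:.3f}")  # ~0.889, not 0.8
```

Here the NBS gives A’s priority roughly 89% of the budget even though the credence in A is only 80%, simply because B’s marginal utility of money falls off; with other utility functions the allocation can depart from the credences in the other direction.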
I like how you go through how a few different scenarios might play out in Section 2, but while intuition can be a useful guide, I think it’s hard to know how things would play out without taking a more formal approach. My guess is that if you formalised these decisions and computed the NBS, things would often, but not always, work out as you hypothesise (e.g. divisible resources with unrelated priorities won’t always lead to worldview diversification; there will be cases in which all resources go to one theory’s preferred option).
I’m a little uncomfortable with the distinction between conflicting priorities and unrelated priorities because unrelated priorities are conflicting once you account for opportunity costs: any dollars spent on theory A’s priority can’t be spent on theory B’s priority (so long as these priorities are different). However, I think you’re pointing at something real here and that cases you describe as “conflicting priorities” will tend to lead to spending resources on compromise options rather than splitting the pot, and that the reverse is true for cases you describe as “unrelated priorities”.
The value of moral information consideration is interesting. It should be possible to provide a coherent account of the value of moral information for IB, because the definition of the value of information doesn’t really depend on the details of how the agent makes a decision. Acquiring moral information can be treated as an act or option just like any other: all the moral theories will have views about how good it would be, and IB can determine whether the agent should choose that option over others. In particular, if the agent is indifferent (as determined by IB) between (1) acquiring some moral information and paying $x and (2) not acquiring the information and paying nothing, then we can say that the value of the information to the agent is $x. Actually computing this will be hard, because it will depend on all future decisions (as changing credences will change future bargaining power), but it’s possible in principle, and I don’t think it’s substantially different from, or harder than, computing the value of moral information on MEC. However, I worry that IB might give quite an implausible account of the value of moral information, for some of the reasons you mention. Moral information that increases the agent’s credence in theory A will give theory A greater bargaining power in future decisions, so theory A will value such information. But if that information lowers theory B’s bargaining power, then theory B will be opposed to obtaining it. It seems likely that the agent will problematically undervalue moral information in some cases, though I haven’t thought through the details.
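To see the worry concretely, here’s a deliberately crude toy (the setup is mine, not the paper’s): a single future $1 decision is resolved by an asymmetric NBS with linear utilities and a zero disagreement point, and the moral information in question would shift credences from (0.6, 0.4) to (0.8, 0.2). Theory A’s expected share of the future decision rises and theory B’s falls, so B will resist paying anything to acquire the information:

```python
# Crude toy of how moral information changes bargaining power in a later decision.
# Assumed (not from the paper): linear utilities, zero disagreement point, and an
# information source that would shift credences from (0.6, 0.4) to (0.8, 0.2).
import math

def nbs_share_for_A(credence_A, credence_B):
    """A's share of a $1 budget under the asymmetric NBS with linear utilities."""
    def log_nash_product(s):
        return credence_A * math.log(s) + credence_B * math.log(1 - s)
    return max((s / 1000 for s in range(1, 1000)), key=log_nash_product)

before = nbs_share_for_A(0.6, 0.4)  # A's future share if the information isn't acquired
after = nbs_share_for_A(0.8, 0.2)   # A's future share if it is (credences have shifted)

print(f"Theory A's future share: {before:.2f} -> {after:.2f}")          # 0.60 -> 0.80
print(f"Theory B's future share: {1 - before:.2f} -> {1 - after:.2f}")  # 0.40 -> 0.20
# A gains from the information and B loses, so B bargains against acquiring it.
```

Whether the agent as a whole then undervalues the information depends on how the bargain over acquiring it is struck.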
I didn’t find the small vs grand worlds objection in Greaves and Cotton-Barratt (2019) very compelling and agree with your response. It seems to me to be analogous to the objections to utilitarianism based on the infeasibility of computing utilities in practice (which I don’t find very compelling).
On regress: perhaps I’m misunderstanding you, but this seems to me to be a universal problem in that we will always be uncertain about how we should make decisions under moral uncertainty. We might have credences in MFT, MEC and IB, but which of these (if any) should we use to decide what to do under uncertainty about what to do under moral uncertainty (and so on...)?
I think you have a typo in the table comparing MFT, MEC and IB: MEC shouldn’t be marked as non-fanatical. Relatedly, my reading of Greaves and Cotton-Barratt (2019) is that IB is more robust to fanaticism but still sometimes recommends fanatical choices (whether it does so in practice is an open question), so a tick here might be overly generous (though I agree that IB has an advantage over MEC here, to the extent that avoiding fanaticism is desirable).
One concern with IB that you don’t mention is that the NBS depends on a “disagreement point” but it’s not clear what this disagreement point should be. The disagreement point represents the utilities obtained if the bargainers fail to reach an agreement. I think the random dictator disagreement point in Greaves and Cotton-Barratt (2019) seems quite natural for many decision problems, but I think this dependence on a disagreement point counts against bargaining approaches.
Hello Aidan. Thanks for all of these, much food for thought. I’ll reply in individual comments to make this more manageable.

My biggest worry is with the assumption that resources are distributed among moral theories in proportion to the agent’s credences in the moral theories. It seems to me that this is an outcome that should be derived from a framework for decision-making under moral uncertainty, not something to be assumed at the outset.
I don’t think I understand the thinking here. It seems fairly natural to say “I am 80% confident in theory A, so that gets 80% of my resources, etc.”, and then to think about what would happen after that. It’s not intuitive to say “I am 80% confident in utilitarianism, so that gets 80% ‘bargaining power’”. But I accept that, if we want to do something like internal bargaining, it’s an open question what the best version of that is.
One concern with IB that you don’t mention is that the NBS depends on a “disagreement point” but it’s not clear what this disagreement point should be. The disagreement point represents the utilities obtained if the bargainers fail to reach an agreement. I think the random dictator disagreement point in Greaves and Cotton-Barratt (2019) seems quite natural for many decision problems, but I think this dependence on a disagreement point counts against bargaining approaches.
I do mention the challenge of the disagreement point (see footnote 7). Again, I agree that this is the sort of thing that merits further inquiry. I’m not sold on the ‘random dictator point’, which, if I understood correctly, is identical to running a lottery in which each theory has an X% chance of getting its top choice (where X% represents your credence in that theory). I note in Section 2 that bargaining agents will likely think it preferable, by their own lights, to bargain over time rather than resolve things with lotteries. It’s for this reason I’m also inclined to prefer a ‘moral marketplace’ over a ‘moral parliament’: the former is what the sub-agents would themselves prefer.
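For concreteness, here’s a minimal sketch of the lottery I have in mind, with made-up credences, options and utilities (so this is only my reading of the ‘random dictator point’, not the paper’s exact construction): each theory is made dictator with probability equal to your credence in it and picks its top option, and a theory’s disagreement utility is its expected utility under that lottery:

```python
# Minimal sketch of a 'random dictator' disagreement point: each theory is made
# dictator with probability equal to the agent's credence in it and picks its top
# option. Credences, options and utilities below are made up for illustration.

credences = {"A": 0.7, "B": 0.3}

# Each theory's (made-up) utility for each option.
utilities = {
    "A": {"A_top": 1.0, "B_top": 0.0, "compromise": 0.6},
    "B": {"A_top": 0.0, "B_top": 1.0, "compromise": 0.7},
}

# Each theory's top option is the one it assigns the highest utility.
top_choice = {theory: max(utilities[theory], key=utilities[theory].get)
              for theory in credences}

# Disagreement utility for each theory: expected utility of the credence-weighted
# lottery over the dictators' top choices.
disagreement_point = {
    theory: sum(credences[dictator] * utilities[theory][top_choice[dictator]]
                for dictator in credences)
    for theory in credences
}
print(disagreement_point)  # {'A': 0.7, 'B': 0.3}
```

Any bargain the theories strike then has to give each of them at least its disagreement utility, which is why the choice of disagreement point matters so much.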
On regress: perhaps I’m misunderstanding you, but this seems to me to be a universal problem in that we will always be uncertain about how we should make decisions under moral uncertainty. We might have credences in MFT, MEC and IB, but which of these (if any) should we use to decide what to do under uncertainty about what to do under moral uncertainty (and so on...)?
I didn’t really explain myself here, but there might be better vs worse regress problems. I haven’t worked out my thoughts enough yet to write something useful.
I’m a little uncomfortable with the distinction between conflicting priorities and unrelated priorities because unrelated priorities are conflicting once you account for opportunity costs: any dollars spent on theory A’s priority can’t be spent on theory B’s priority (so long as these priorities are different). However, I think you’re pointing at something real here and that cases you describe as “conflicting priorities” will tend to lead to spending resources on compromise options rather than splitting the pot, and that the reverse is true for cases you describe as “unrelated priorities”.
Agreed, the distinction could be tightened up. And yes, the important bit seems to be whether agents will just ‘do their own thing’ vs. consider moral trade (and moral ‘trade wars’).
I like how you go through how a few different scenarios might play out in Section 2, but while intuition can be a useful guide, I think it’s hard to know how things would play out without taking a more formal approach.
I don’t really disagree. However, as I stated, my purpose was to give people ‘a feel’ for the view, which I doubt they would get from Greaves and Cotton-Barratt’s paper (and which I certainly didn’t get when I first read it). The idea was to sketch a ‘quick-and-dirty’ version of the view to see if it was worth doing with greater precision.