Bracketing is the most interesting recent idea in altruistic decision-making.
However, from the point of view of a sequential decision problem, an EV-maxing learner is more willing to make guesses about long-term, high-stakes effects and to learn from them, whereas a bracketing agent restricts itself to actions it is non-clueless about and may never learn about the domains it brackets out, including the value of information itself. In this sense, bracketing seems to build in a form of risk aversion: it avoids speculative bets and forgoes opportunities to learn and adapt.
In practice, the main downside I see is that bracketers risk giving up influence over high-stakes, uncertain problems and instead prioritizing low-stakes neartermist interventions, because of cluelessness about the higher-stakes ones. As a result, bracketers may be overly conservative and lose influence relative to EV-maxers, who are more willing to make provisional guesses, risk being wrong, and update along the way.
I think bracketing agents could sometimes be moved to bracket out and ignore the value of information, and more often than EV-maxers would, but it's worth breaking things down further to see when. Imagine we're considering an intervention with:
1. Direct effects on a group of moral patients (or locations of value), and we're clueless about those effects.
2. Some (expected) value of information for another group of moral patients (possibly the same group, a disjoint group, or one intersecting the group in 1).
Then:
a. If the group in 2 is disjoint from the group in 1, then we can bracket out those affected in 1 and decide just on the basis of the expected value of information in 2 (and opportunity costs).
b. If the group in 2 is a subset of the group in 1, then, for the intervention to beat doing nothing, the minimum expected value of information needs to be high enough to overcome the potential worst-case expected downsides from the direct effects on the group in 1. If it isn't, the VOI gets bracketed away and ignored along with the direct effects in 1.
And there are intermediate cases (where the groups partially overlap), with probably intermediate recommendations.
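To make the two clean cases concrete, here is a minimal sketch of the decision rule as I read it. The function name, its arguments, and the numbers are hypothetical illustrations, not a settled formalism:

```python
# A minimal sketch of the bracketing decision rule in cases (a) and (b).
# All names and numbers are hypothetical illustrations.

def bracketing_verdict(groups_disjoint: bool,
                       expected_voi: float,
                       worst_case_direct_downside: float,
                       opportunity_cost: float) -> bool:
    """Return True if the intervention beats doing nothing under bracketing.

    groups_disjoint: True in case (a), where the VOI accrues to a group
        disjoint from the one we're clueless about; False in case (b),
        where it accrues to a subset of that group.
    expected_voi: expected value of information, in some common units.
    worst_case_direct_downside: magnitude (>= 0) of the worst-case expected
        direct harm to the group we're clueless about.
    opportunity_cost: expected value of the best alternative use of resources.
    """
    if groups_disjoint:
        # Case (a): bracket out the clueless direct effects entirely and
        # decide on the VOI alone, net of opportunity costs.
        return expected_voi > opportunity_cost
    else:
        # Case (b): the VOI sits inside the bracketed group, so it must be
        # large enough to overcome the worst-case direct downside as well;
        # otherwise it gets bracketed away along with the direct effects.
        return expected_voi - worst_case_direct_downside > opportunity_cost


# Same hypothetical numbers under both cases:
print(bracketing_verdict(groups_disjoint=True,
                         expected_voi=10.0,
                         worst_case_direct_downside=50.0,
                         opportunity_cost=2.0))  # True: downside bracketed out
print(bracketing_verdict(groups_disjoint=False,
                         expected_voi=10.0,
                         worst_case_direct_downside=50.0,
                         opportunity_cost=2.0))  # False: VOI bracketed away too
```

The contrast in the two calls is the point: the same intervention passes in case (a) and fails in case (b), purely because of where the VOI lands relative to the bracketed group.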