This is a fun paper. But it rests a lot on an unsupported intuition about what’s required in order to “take the depth of our uncertainty seriously” (i.e., that this requires imprecise credences with a very wide range of imprecision). Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation, it seems to me that we have very good reason to reject that theoretical intuition.
I’m a bit surprised that this is getting downvoted, rather than just disagree-voted. It’s fine to reach a different verdict and all, but y’all really think the methodological point I’m making here shouldn’t even be said? Weird.
I didn’t downvote, but if I had, it would be because I don’t think it’s “surely false” “that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation”, and that claim seems overconfident. (Or, rather, AMF could be no better than burning money or the Make a Wish Foundation, even if all are better than FMF, in case there is asymmetry between AMF and FMF.)
I specifically worry that AMF could be bad if and because it hurts farmed animals more than it helps people, considering also that descendants of beneficiaries will likely consume more factory-farmed animal products, as animal product consumption and farming intensification tend to increase with economic development. Wild animal (invertebrate) effects could again go either way. If you’re an expectational total utilitarian or otherwise very risk-neutral wrt aggregate welfare, then you may as well ignore the near-term benefits and harms and focus on the indirect effects on the far future, e.g. through how it affects the EA community and x-risks. (Probably FMF would have very bad community effects, worse than AMF’s are good relative to more direct near-term effects, unless FMF quietly acts to convince people to stop donating to AMF.)
And I say this as a recurring small donor to malaria charities including AMF. I think AMF can still be a worthwhile part of a portfolio of interventions, even if it turns out to not look robustly good on its own (it could be that few things do). See my post Hedging against deep and moral uncertainty for illustration.
Hi Richard,
Is this a fair comparison? For readers’ context, Andreas compares the Against Malaria Foundation (AMF) with Make-A-Wish Foundation:
I agree increasing malaria is surely worse than decreasing malaria, but I would not say Make-A-Wish Foundation is surely worse than AMF. Given this distinction, I (lightly) downvoted your comment.
Thanks for explaining!
It is a fair comparison. Andreas’ relevant claim is that it isn’t clear what the sign of the effect from AMF is. If AMF is negative, then its opposite—FMF—would presumably be positive.
Thanks for following up!
I am not sure about this. I think Andreas’ claim is that AMF may be negative due to indirect effects. So, conditional on AMF being negative, one should expect the indirect effects to dominate the direct ones. This means a good candidate for “Minus AMF”, an organisation whose value is symmetric to that of AMF, would have both direct and indirect effects symmetric to those of AMF.
The name For Malaria Foundation (FMF) suggested to me an organisation whose interventions have direct effects with similar magnitude, but opposite sign of those of AMF. However, the negative indirect effects of intentionally increasing malaria deaths seem worse than the negative of the positive indirect effects of decreasing malaria deaths[1]. So, AMF being negative would imply FMF having positive direct effects, but in this case I would expect FMF’s indirect effects to be sufficiently negative for it to be overall net negative.
I am a utilitarian, but recognise that saving a life, and abstaining from saving a life, can have different indirect consequences.
If you’re worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with “Minus AMF” in my original comment. (Or imagine stipulating away any such differences.) It doesn’t affect the essential point.
Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I’m most inclined to think this is one of those cases where we’ve got a philosophical argument we don’t immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.
On the other hand, I’m most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules have or exploit too little structure in how they model our epistemic predicament. I still think our evidence fails to rule out probability functions that put sufficient probability mass on potential bad downstream effects, thereby making AMF come out worse in terms of maximizing expected value relative to that kind of probability function. I’m more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper ‘Tough enough? Robust satisficing as a decision norm for long-term policy analysis’, but we weren’t especially sold on them.
Thanks, yeah, I remember liking that paper. Though I’m inclined to think you should assign (precise) higher-order probabilities to the various “admissible probability functions”, from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems afaict?
General lesson: if we don’t have any good way of dealing with imprecise credences, we probably shouldn’t regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.
I worry that this is motivated reasoning. Should what we can justifiably believe will happen as a consequence of our actions depend on whether it results in satisfactory moral consequences (e.g. avoiding paralysis)?
Another response could be to just look for more structure in our credences we’ve failed to capture. Say we have a bunch of probability functions according to which AMF is bad and a bunch according to which AMF is good, but we nonetheless think AMF is good. Why would we think AMF is good anyway? If we’re epistemically rational, it would presumably be because we doubt the functions according to which it is bad more than we do the ones according to which it is good. So, we’ve actually failed to adequately capture our credences and their structure with these probability functions as they are.
One way to represent this is to have another probability function to mix all of those probability functions (the “(precise) higher-order probabilities to the various ‘admissible probability functions’”), reducing to precise credences, in such a way that AMF turns out to look good, like @Richard Y Chappell suggests in reply here. Another, still permitting imprecise credences, is to have multiple such mixing functions of probability functions, but such that AMF still looks good on each mixing function. If you’re sympathetic to imprecise credences in the first place (like I am), the latter seems like a pretty good solution.
Of course, an alternative explanation could be that we aren’t actually justified in thinking AMF is good. We should be careful in how we pick these higher-order probabilities to avoid motivated reasoning. In picking these higher-order probabilities, we should remain open to the possibility that AMF is not actually robustly good.
Thanks for the summary, Nicholas. For reference, the paper was discussed on EA Forum here.
One move we can make to reduce paralysis with the maximality rule is to consider whole portfolios/sequences of actions, rather than acts in isolation. We can make up for the potential downsides of some acts with the upsides of others. See my post Hedging against deep and moral uncertainty.
Hi Andreas! I’m worried that the maximality rule will overgeneralize, implying that little is rationally required of us. Consider the decision whether to have children. There are obvious arguments both for and against from a self-interested point of view, and it isn’t clear exactly how to weigh them against each other. So, plausibly, having children will max EU according to at least one probability function in our representor, whereas not having children will max EU according to at least one other probability function in our representor. Result via maximality rule: either choice is rationally permissible. Or consider some interesting public policy problem from the perspective of a benevolent social planner. Given the murkiness of social science research, it seems that, if we’ve gone in for the imprecise credence picture, no one policy will maximize EU relative to every credence function in the representor, in which case many policy choices will be rationally permissible. I wonder if you have thoughts on this?
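The overgeneralization worry can be made concrete with a toy sketch of the maximality rule: an act is permissible iff no alternative has strictly higher expected utility under every probability function in the representor. All acts, utilities, and probability functions below are hypothetical illustrations, not anything from the paper.

```python
import numpy as np

# Rows are acts, columns are states of the world (hypothetical numbers).
utilities = np.array([
    [10.0, -5.0],   # act A: great in state 1, bad in state 2
    [-2.0,  6.0],   # act B: the reverse
    [ 0.0,  0.0],   # act C: do nothing
])
acts = ["A", "B", "C"]

# The representor: two admissible probability functions over the states.
representor = [np.array([0.8, 0.2]), np.array([0.2, 0.8])]

# Expected utility of every act under every admissible function.
eu = np.array([[u @ p for p in representor] for u in utilities])

def permissible(i):
    """Maximality: act i is permissible iff no alternative act beats it
    under *every* probability function in the representor."""
    return not any(
        all(eu[j, k] > eu[i, k] for k in range(len(representor)))
        for j in range(len(acts)) if j != i
    )

for i, name in enumerate(acts):
    print(name, "permissible:", permissible(i))
```

With these numbers, each act maximizes EU relative to some admissible function (or at least is undominated), so all three come out permissible, which is exactly the “little is rationally required” worry.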
This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions because a set of functions doesn’t (by default) include a weighting amongst the set.
It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.
If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could go for EV based on that distribution, or you could make other choices that are more risk averse. But whatever you do, you’re back to using a single probability function. I think that’s probably what you should do. But that sounds to me indistinguishable from the naive response.
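The weight-and-collapse move described here can be sketched numerically. Everything below is a hypothetical illustration: two admissible credence functions over three outcomes, second-order weights over those functions, and a single expected value computed from the resulting mixture.

```python
import numpy as np

# Two rival probability functions over outcomes (bad, neutral, good);
# each row is one admissible credence function (hypothetical numbers).
credence_functions = np.array([
    [0.6, 0.3, 0.1],   # pessimistic function: intervention likely bad
    [0.1, 0.2, 0.7],   # optimistic function: intervention likely good
])

# Utilities attached to the bad / neutral / good outcomes.
outcome_values = np.array([-10.0, 0.0, 8.0])

# Higher-order (second-order) credences: how much we trust each function.
weights = np.array([0.3, 0.7])

# Collapse the set into a single mixture distribution, then take EV.
mixture = weights @ credence_functions
expected_value = mixture @ outcome_values

print("mixture distribution:", mixture)       # [0.25, 0.23, 0.52]
print("higher-order expected value:", expected_value)  # 1.66
```

Once the weights are fixed, this is indeed just a single probability function again, which is the point being made: the weighted set is indistinguishable from the naive precise response.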
The idea of a “precise probability function” is in general flawed. The whole point of a probability function is that you don’t have precision. A probability function of a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty. There is no precision to it. That’s the Bayesian perspective on probability, which seems like the right interpretation of probability in this context.
The concern motivating the use of imprecise probabilities is that you don’t always have a unique prior you’re justified in using to compare the plausibility of these distributions. In some cases you’ll find that any choice of unique prior, or unique higher-order distribution for aggregating priors, involves an arbitrary choice. (E.g., arbitrary weights assigned to conflicting intuitions about plausibility.)
You can just widen the variance of your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.
For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 C in global warming.
We might have no idea whether 0.1 C of warming increases p(doom) by 0.1% or 0.01%, but be confident it isn’t 10% or more.
You could model the distribution of your uncertainty with, say, a beta distribution such as Beta(a=0.0001, b=100).
You might wonder, why b=100 and not b=200, or 101? It’s an arbitrary choice, right?
To which I have two responses:
1. You can go one level up and model the beta parameter with some distribution over all reasonable choices, say, a uniform distribution between 10 and 1000.
2. While it is arbitrary, I claim that refusing to estimate expected effects because we can’t make a fully non-arbitrary choice is itself an arbitrary choice. We are acting in a dynamic world where opportunities can be lost every second, and no action is still an action: the action of forgoing the counterfactual option. So by declining to assign any outcome value, and acting accordingly, you have implicitly, and arbitrarily, assigned an outcome value of 0. When a morally relevant outcome can only be modelled with somewhat arbitrary statistical priors, doing so nevertheless seems less arbitrary than just assigning it an outcome value of 0.
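The “go one level up” move can be sketched with Monte Carlo sampling: instead of fixing Beta(0.0001, 100), draw the b parameter from a Uniform(10, 1000) hyperprior and sample the probability increase from the resulting beta distribution. The parameter values come from the discussion above; the sample size and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHA = 0.0001   # the a parameter from the Beta(a=0.0001, b=100) example
N = 100_000      # number of Monte Carlo samples (arbitrary)

# Level 1: the fixed Beta(0.0001, 100) prior over the increase in p(doom).
p_fixed = rng.beta(ALPHA, 100, size=N)

# Level 2: "go one level up" -- draw b from Uniform(10, 1000), then draw
# the probability increase from Beta(ALPHA, b).
b = rng.uniform(10, 1000, size=N)
p_hier = rng.beta(ALPHA, b)

print(f"fixed prior mean:        {p_fixed.mean():.2e}")
print(f"hierarchical prior mean: {p_hier.mean():.2e}")
print(f"P(increase >= 10%) under hierarchical prior: {(p_hier >= 0.10).mean():.4f}")
```

Under both priors almost all mass sits very near zero and essentially none at 10% or more, matching the stated confidence that the increase isn’t 10% or more, while the hyperprior smooths out the arbitrary choice of b.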