I think I basically agree with all your responses, but I also think this misses a more important case of cluelessness, specifically complex cluelessness. Saving a child has impacts on farmed animals, wild animals, economic growth and climate change, some of which could be negative and some of which could be positive. How do you weigh them all together non-arbitrarily to come to the verdict that it’s definitely good in expectation, or to the verdict that it’s definitely bad in expectation? This isn’t a case of having no reasons either way (or all reasons pointing in one direction), but of having important reasons each way that are too hard to weigh against one another in a way that’s justified, non-arbitrary and defensible.
It would also be surprising for the direct effects on the child to be the tie-breaker if you have precise probabilities, given how much more is at stake in the indirect effects.
Seems natural to just go meta, treating the hard-to-assess determinants of expected value as akin to hard-to-discover empirical facts, and maximizing meta-expected value as one’s “best attempt” to manage this additional uncertainty.
I’m less sure about this, but it seems like the defense of EV against simple cluelessness could carry over to defend meta-EV against complex cluelessness? E.g. in the long run (and across relevant possible worlds), we’d expect these agents to do better on average than agents following any other subjectively-accessible decision procedure.
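To put this a bit more concretely (the notation is mine, just a sketch): write $M$ for the hard-to-assess determinants of value and $\mu$ for a second-order distribution over them; then for each act $a$,

\[
\text{meta-EV}(a) \;=\; \mathbb{E}_{M \sim \mu}\big[\,\mathbb{E}[V \mid a, M]\,\big],
\]

and the proposal is to pick whichever act maximizes this.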
I’m not sure how maximizing meta-expected value differs from just maximizing expected value. If your second-order probabilities are precise, the two seem to collapse into each other by the law of total expectation.
I’d claim that the additional uncertainty is unquantifiable, or at least that no single set of precise probabilities (a single precise probability distribution over outcomes for each act) can be justified over all the alternatives. There’s sometimes no unique best attempt, and no uniquely best way to choose between or weigh the candidates. Sometimes there’s no uniform prior at all, and sometimes there are infinitely many competing candidates that might be called uniform, because of different ways to parametrize your distribution. At the extreme, an idealized rational agent would need a universal prior, but there are multiple, and they depend on arbitrary parametrizations. How do you pick one over all the others?
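To illustrate the parametrization point with a toy example (my own, not anything from the literature): a prior that's uniform in a parameter isn't uniform in a rescaled version of the same parameter, so "be uniform" underdetermines the prior.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, 1_000_000)  # prior "uniform in theta"
phi = theta ** 2                      # the same unknown, reparametrized

# If the prior were also uniform in phi, this would be ~0.25.
# Instead it's ~0.5, because P(phi < 0.25) = P(theta < 0.5).
print(np.mean(phi < 0.25))
```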
I do think it’s possible we aren’t always clueless, depending on what kinds of credences you entertain.
FWIW, my preferred approach is something like this, although maybe we can go further: https://forum.effectivealtruism.org/posts/Mig4y9Duu6pzuw3H4/hedging-against-deep-and-moral-uncertainty
It builds on https://academic.oup.com/pq/article-abstract/71/1/141/5828678
Also this might be useful in some cases: https://forum.effectivealtruism.org/posts/f4sep8ggXEs37PBuX/even-allocation-strategy-under-high-model-ambiguity
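As a toy illustration of the even-allocation idea (my own numbers, not taken from the linked post): when two models you can't weigh against each other disagree about which option is good, splitting resources evenly can have a better worst case than going all-in on either.

```python
import numpy as np

# payoffs[i, j] = value of putting everything into option j if model i is true
payoffs = np.array([
    [ 2.0, -1.0],  # model A: option 1 looks great, option 2 backfires
    [-1.0,  2.0],  # model B: the reverse
])

all_in_worst = payoffs.min(axis=0)             # worst case of each all-in bet
even_split_worst = payoffs.mean(axis=1).min()  # worst case of a 50/50 split

print(all_in_worst)      # [-1. -1.]
print(even_split_worst)  # 0.5 -- the even split hedges across both models
```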