I’m not sure what you mean by maximizing meta-expected value. How is this different from just maximizing expected value?
I’d claim that the additional uncertainty is unquantifiable, or at least that no single set of precise probabilities (one precise probability distribution over outcomes per act) can be justified over all alternatives. There’s sometimes no unique best attempt, and no uniquely best way to choose between or weigh the candidates. Sometimes there’s no uniform prior at all, and sometimes there are infinitely many competing candidates that could each be called uniform, because of the different ways you can parametrize your distribution. At the extreme, an idealized rational agent needs a universal prior, but there are multiple universal priors, and they depend on arbitrary parametrizations. How do you pick one over all the others?
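To illustrate the parametrization point, here’s a minimal sketch (the quadratic reparametrization phi = theta**2 is just an illustrative assumption): two priors that are each “uniform” in their own parametrization of the same unknown quantity assign different credences to the same event.

```python
# Sketch: "uniform" depends on parametrization.
# Describe an unknown quantity either by theta in (0, 1), or
# equivalently by phi = theta**2 (a hypothetical reparametrization).
import random

random.seed(0)
N = 100_000

# Prior A: uniform over theta.
theta_uniform = [random.random() for _ in range(N)]

# Prior B: uniform over phi, which implies theta = sqrt(phi).
theta_from_phi_uniform = [random.random() ** 0.5 for _ in range(N)]

# Both priors are "uniform", yet they give different credences
# to the very same event, theta <= 0.5:
p_a = sum(t <= 0.5 for t in theta_uniform) / N           # roughly 0.5
p_b = sum(t <= 0.5 for t in theta_from_phi_uniform) / N  # roughly 0.25

print(p_a, p_b)
```

Neither prior is privileged by the structure of the problem alone, which is the sense in which there’s no unique “uniform” candidate to fall back on.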
I do think it’s possible we aren’t always clueless, depending on what kinds of credences you entertain.
FWIW, my preferred approach is something like this, although maybe we can go further: https://forum.effectivealtruism.org/posts/Mig4y9Duu6pzuw3H4/hedging-against-deep-and-moral-uncertainty
It builds on https://academic.oup.com/pq/article-abstract/71/1/141/5828678
Also this might be useful in some cases: https://forum.effectivealtruism.org/posts/f4sep8ggXEs37PBuX/even-allocation-strategy-under-high-model-ambiguity