This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions: a set of functions doesn’t (by default) include any weighting amongst its members.
It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, i.e. according to your own priors.
If you do that, then you can combine them into a single mixture distribution, and then make a decision based on what that distribution says about the outcomes. You could maximize expected value under that distribution, or you could make other choices that are more risk averse. But whatever you do, you’re back to using a single probability function. I think that’s probably what you should do. But that sounds to me indistinguishable from the naive response.
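To make that concrete, here is a minimal sketch in Python of the naive response as I understand it; the candidate estimates, weights, and payoff numbers are all made up purely for illustration:

```python
import numpy as np

# Three hypothetical probability functions for a binary outcome, here reduced
# to three point estimates of p(doom), together with my subjective weights
# for how plausible I find each one.
candidate_probs = np.array([0.001, 0.01, 0.05])
weights = np.array([0.5, 0.3, 0.2])
weights = weights / weights.sum()  # normalize so the weights form a prior over models

# Combining the set with its weights yields a single (mixture) probability...
p_mixture = float(np.dot(weights, candidate_probs))

# ...which can feed an ordinary expected-value comparison, e.g. an intervention
# that costs 1 unit and averts a 1000-unit loss if doom would otherwise occur.
ev_intervene = p_mixture * 1000 - 1
ev_do_nothing = 0.0
print(p_mixture, ev_intervene, ev_do_nothing)
```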
The idea of a “precise probability function” is flawed in general. The whole point of a probability function is that you don’t have precision. A probability function over a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty; there is no precision to it. That’s the Bayesian perspective on probability, which seems like the right interpretation of probability in this context.
You can just widen the variance of your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.
For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 °C of global warming.
We might have no idea whether 0.1 °C of warming increases p(doom) by 0.1% or by 0.01%, but be confident the increase isn’t 10% or more.
You could model the distribution of your uncertainty with, say, a beta distribution such as Beta(a=0.0001, b=100).
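Rather than arguing about the parameters in the abstract, you can just check what such a prior actually implies. A small sketch (using scipy purely for illustration):

```python
from scipy import stats

# The Beta(a=0.0001, b=100) prior above, read as a distribution over the
# increase in p(doom) from an extra 0.1 °C of warming.
prior = stats.beta(a=0.0001, b=100)

# Print the prior mean, and how much mass the prior puts above 0.01%, 0.1%, and 10%.
print("prior mean:", prior.mean())
for threshold in (0.0001, 0.001, 0.1):
    print(f"P(increase > {threshold:.2%}):", prior.sf(threshold))
```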
You might wonder, why b=100 and not b=200, or 101? It’s an arbitrary choice, right?
To which I have two responses:
First, you can go one level up and model the beta parameter itself with some distribution over all reasonable choices, say a uniform distribution between 10 and 1000 (see the sketch after these two responses).
Second, while it is arbitrary, I claim that refusing to estimate expected effects because we can’t make a fully non-arbitrary choice is itself an arbitrary choice. We are acting in a dynamic world where opportunities can be lost every second, and taking no action is still an action: the action of foregoing the counterfactual option. So by declining to assign any value to the outcome, and acting accordingly, you have implicitly, and arbitrarily, assigned an outcome value of 0. When there’s some morally significant outcome we can only model with somewhat arbitrary statistical priors, doing so nevertheless seems less arbitrary than just assigning an outcome value of 0.
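Here is the sketch mentioned under the first response; the uniform hyperprior over b and the Monte Carlo sample size are just illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Going one level up: instead of committing to b=100, draw b uniformly from the
# range of choices I'd consider reasonable, then draw the p(doom) increase from
# Beta(0.0001, b). Monte Carlo over both levels gives the marginal distribution.
n = 1_000_000
b = rng.uniform(10, 1000, size=n)
increase = rng.beta(0.0001, b)

print("marginal mean increase:", increase.mean())
print("P(increase > 10%):", (increase > 0.1).mean())
```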