I haven’t gone through this whole post, but I generally like what I have seen.
I do want to advertise a recent paper I published on infinite ethics, which argues that there are useful aggregative rules that can’t be represented by an overall numerical value, and yet take into account both the number of persons experiencing some good or bad and the probability of such outcomes: https://academic.oup.com/aristotelian/article-abstract/121/3/299/6367834
The resulting value scale is only a partial ordering, but I think it gets intuitive cases right, and is at least provably consistent, even if not complete. (I suspect that for infinite situations, we can’t get completeness in any interesting way without using the Axiom of Choice, and I think anything that needs the Axiom of Choice can’t give us any reason for why it rather than some alternative is the right one.)
I don’t think your argument against risk aversion fully addresses the issue. You give one argument for diversification, based on diminishing marginal utility, and show that it plausibly doesn’t apply to global charities. However, there’s a separate argument for diversification that is about risk itself, not diminishing marginal utility. You should look at Lara Buchak’s book, “Risk and Rationality”, which argues that there is a distinct form of rational risk aversion (or risk seeking). On a risk-neutral approach, each outcome counts in exact proportion to its probability, regardless of whether it’s the best outcome, the worst, or somewhere in between. On a risk-averse approach, the top ten percentiles of outcomes get less relative weight than the bottom ten percentiles, and vice versa on a risk-seeking approach.
This turns out to correspond precisely to one way of making sense of some kinds of inequality aversion: making things better for a worse-off person improves the world more than making things equally much better for a better-off person.
None of the arguments you give tell against this approach rather than the risk-neutral one.
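The rank-dependent weighting can be sketched numerically. This is an illustrative toy of mine, not code from Buchak’s book: `reu` computes a risk-weighted expected utility, where a risk function `r` reweights the probability of doing at least as well as each outcome level. Taking `r(p) = p` recovers risk neutrality, while a convex choice like `r(p) = p²` underweights the good tail and so is risk-averse.

```python
def reu(outcomes, r=lambda p: p):
    """Risk-weighted expected utility in the style of Buchak's REU theory.

    outcomes: list of (utility, probability) pairs, probabilities summing to 1.
    r: risk function on [0, 1]; r(p) = p gives ordinary expected utility.
    """
    ranked = sorted(outcomes)   # worst outcome first
    value = ranked[0][0]        # you are guaranteed at least the worst utility
    tail = 1.0                  # probability of doing at least this well
    for i in range(1, len(ranked)):
        tail -= ranked[i - 1][1]
        # each improvement over the previous level is weighted by r(tail)
        value += r(tail) * (ranked[i][0] - ranked[i - 1][0])
    return value

coin = [(0, 0.5), (100, 0.5)]
print(reu(coin))                       # risk-neutral value: 50.0
print(reu(coin, r=lambda p: p ** 2))   # risk-averse value: 25.0
```

A risk-averse agent values the 50/50 gamble at only 25, below its expectation of 50, with no appeal to diminishing marginal utility: the utilities here are already linear in the payoff.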
One important challenge to the risk-sensitive approach is that, if you make a large number of uncorrelated decisions, the law of large numbers kicks in and it ends up behaving just like risk-neutral decision theory. But a single large global-scale intervention is precisely a case in which you aren’t making a large number of uncorrelated decisions, so considerations of risk sensitivity can become relevant.
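That convergence can be illustrated with the same toy risk-weighted utility (again my sketch, not anything from the post): as you average more independent 50/50 gambles, the spread of outcomes shrinks, and the risk-averse value climbs from 25 toward the risk-neutral value of 50.

```python
from math import comb

def reu(outcomes, r):
    # risk-weighted expected utility of (utility, probability) pairs
    ranked = sorted(outcomes)
    value, tail = ranked[0][0], 1.0
    for i in range(1, len(ranked)):
        tail -= ranked[i - 1][1]
        value += r(tail) * (ranked[i][0] - ranked[i - 1][0])
    return value

def averaged_gambles(n):
    # exact distribution of the average payoff of n independent
    # 50/50 gambles paying 0 or 100 (a scaled binomial)
    return [(100 * k / n, comb(n, k) * 0.5 ** n) for k in range(n + 1)]

risk_averse = lambda p: p ** 2
for n in (1, 10, 100):
    print(n, round(reu(averaged_gambles(n), risk_averse), 2))
# the risk-averse value starts at 25 for n = 1 and approaches the
# risk-neutral 50 as n grows
```

With one gamble the risk-averse value is well below the expectation; with a hundred uncorrelated gambles it is nearly indistinguishable from it, which is why risk sensitivity matters most for one-shot, large-scale decisions.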