I’m making a fresh comment to make some different points. I think our earlier thread has reached the limit of productive discussion.
I think your theory is best seen as a metanormative theory for aggregating both the well-being and the moral preferences of existing agents. There are two distinct types of value that we should consider:
prudential value: how good a state of affairs is for an agent (e.g. their level of well-being, according to utilitarianism; their priority-weighted well-being, according to prioritarianism).
moral value: how good a state of affairs is, morally speaking (e.g. the total well-being, according to totalism; or the total priority-weighted well-being, according to prioritarianism).
The aim of a population axiology is to determine the moral value of a state of affairs in terms of the prudential value of the agents who exist in that state of affairs. Each agent can have a preference order over population axiologies, expressing their moral preferences.
We could see your theory as looking at the prudential value of all the agents in a state of affairs (their level of well-being) and their moral preferences (how good they think that state of affairs is compared to the other states of affairs in the choice set). The moral preferences, at least in part, determine the critical level (because you take moral intuitions into account, e.g. that the sadistic repugnant conclusion is very bad, when setting critical levels). So the critical level of an agent (on your view) expresses the moral preferences of that agent. You then aggregate the well-being and moral preferences of agents to determine overall moral value. You're aggregating not just well-being, but also moral preferences, which is why I think this is best seen as a metanormative theory.
Because the critical level is used to express moral preferences (as opposed to purely discounting well-being), I think it's misleading, and the source of a lot of confusion, to call this a critical level theory. It can incorporate critical level theories, if agents have moral preferences for critical level theories, but the theory is, or should be, much more general. In particular, in determining the moral preferences of agents, one could (and, I think, should) take normative uncertainty into account, so that the 'critical level' of an agent represents their moral preferences after taking moral uncertainty into account. Aggregating these moral preferences means that your theory is actually a two-level metanormative theory: it can (and should) take standard normative uncertainty into account in determining the moral preferences of each agent, and then aggregate those moral preferences across agents.
Hopefully, you agree with this characterisation of your view. I think there are now some things you need to say about determining the moral preferences of agents and how they should be aggregated. If I understand you correctly, each agent in a state of affairs looks at some choice set of states of affairs (states of affairs that could obtain in the future, given certain choices?) and comes up with a number representing how good or bad the state of affairs that they are in is. In particular, this number could be negative or positive. I think it’s best just to aggregate moral preferences directly, rather than pretending to use critical levels that we subtract from levels of well-being, and then aggregate ‘relative utility’, but that’s not an important point.
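To make the bookkeeping explicit (notation I'm introducing here, not anything from your write-up): writing w_i(s) for the well-being of agent i in state of affairs s and c_i for their critical level, aggregating 'relative utility' gives

$$V(s) = \sum_i \big(w_i(s) - c_i\big) = \sum_i w_i(s) - \sum_i c_i,$$

so if the critical levels are really encoding moral preferences, one might as well define a moral-preference score m_i(s) directly and aggregate V(s) = \sum_i m_i(s); the subtraction step does no extra work.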
I think the choice-set dependence of moral preferences is not ideal, but I imagine you'll disagree with me here. In any case, I think a similar theory could be specified that doesn't rely on this choice-set dependence, though I imagine it might be harder to avoid the conclusions you aim to avoid, given choice-set independence. I haven't thought about this much.
You might want to think more about whether summing up moral preferences is the best way to aggregate them. This form of aggregation seems vulnerable to extreme preferences that could dominate lots of mild preferences. I haven't thought much about this and don't know of any literature on this directly, but I imagine voting theory is very relevant here. In particular, the theory I've described looks just like a score voting method. Perhaps you could place bounds on scores/moral preferences somehow to avoid the dominance of very strong preferences, but it's not immediately clear to me how this could be done justifiably.
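To illustrate the worry concretely (a toy sketch with made-up numbers and a hypothetical bounds parameter, not anything you've proposed): three agents mildly prefer A, one agent reports an extreme score for B, and the unbounded sum lets that one agent decide the outcome.

```python
# Toy score-voting aggregation of moral-preference scores over a choice set.
def aggregate(scores_by_agent, bounds=None):
    """Sum each agent's score for each option, optionally clipping scores to [lo, hi]."""
    totals = {}
    for agent_scores in scores_by_agent:
        for option, score in agent_scores.items():
            if bounds is not None:
                lo, hi = bounds
                score = max(lo, min(hi, score))
            totals[option] = totals.get(option, 0.0) + score
    return totals

agents = [
    {"A": 1.0, "B": 0.0},
    {"A": 1.0, "B": 0.0},
    {"A": 1.0, "B": 0.0},
    {"A": 0.0, "B": 100.0},  # one extreme preference
]

print(aggregate(agents))                 # {'A': 3.0, 'B': 100.0}: the extreme agent wins
print(aggregate(agents, bounds=(0, 1)))  # {'A': 3.0, 'B': 1.0}: bounded scores restore the mild majority
```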
It’s worth noting that the resulting theory won’t avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you’re OK with that. I get the impression that you’re willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases.
I very much agree with these points you make.
About choice-set dependence: I'll leave that up to each person. For example, if everyone strongly believes that the critical levels should be choice-set independent, then fine, they can choose independent critical levels for themselves. But the critical levels indeed also reflect moral preferences, and can include moral uncertainty. So, for example, someone with a strong credence in total utilitarianism might lower his or her critical level and make it choice-set independent.
About the extreme preferences: I suggest people can choose a normalization procedure, such as variance normalization (cf. Owen Cotton-Barratt, http://users.ox.ac.uk/~ball1714/Variance%20normalisation.pdf, and https://stijnbruers.wordpress.com/2018/06/06/why-i-became-a-utilitarian/).
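A minimal sketch of how I understand variance normalization in this setting (my own reading, with made-up numbers, not the exact procedure in the linked paper): each agent's scores over the choice set are rescaled to mean zero and unit variance before summing, so an agent can't dominate simply by reporting bigger numbers.

```python
import statistics

def variance_normalize(scores):
    """Rescale one agent's scores over the choice set to mean 0 and standard deviation 1."""
    options = list(scores)
    values = [scores[o] for o in options]
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:  # an indifferent agent contributes nothing
        return {o: 0.0 for o in options}
    return {o: (scores[o] - mean) / sd for o in options}

# Same toy example as above: the extreme agent no longer swamps the mild majority.
agents = [
    {"A": 1.0, "B": 0.0},
    {"A": 1.0, "B": 0.0},
    {"A": 1.0, "B": 0.0},
    {"A": 0.0, "B": 100.0},
]
totals = {}
for agent in agents:
    for option, score in variance_normalize(agent).items():
        totals[option] = totals.get(option, 0.0) + score
print(totals)  # {'A': 2.0, 'B': -2.0}: each agent now contributes equally
```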
“It’s worth noting that the resulting theory won’t avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you’re OK with that. I get the impression that you’re willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases.” Indeed!
Great—I’m glad you agree!
I do have some reservations about (variance) normalisation, but it seems like a reasonable approach to consider. I haven’t thought about this loads though, so this opinion is not super robust.
Just to tie it back to the original question, whether we prioritise x-risk or wild animal suffering (WAS) will depend on the agents who exist, obviously. Because x-risk mitigation is plausibly much more valuable on totalism than WAS mitigation is on other plausible views, I think you need almost everyone to have very, very low (in my opinion, unjustifiably low) credence in totalism for your conclusion to go through. In the actual world, I think x-risk still wins. As I suggested before, it could be the case that the value of x-risk mitigation is not that high, or even negative, due to s-risks (this might be your best line of argument for your conclusion), but this suggests prioritising large-scale s-risks. You rightly pointed out that a million years of WAS is the most concrete example of s-risk we currently have. It seems plausible that other and larger s-risks could arise in the future (e.g. large-scale sentient simulations), which, though admittedly speculative, could be really big in scale. I tend to think general foundational research aiming at improving the trajectory of the future is more valuable to do today than WAS mitigation. What I mean by 'general foundational research' is not entirely clear, but even thinking about and clarifying that seems more important to me than WAS mitigation.