I like how the sequence engages with several kinds of uncertainties that one might have.
I had two questions:
1. Does the sequence assume a ‘good minus bad’ view, where independent bads (particularly, severe bads like torture-level suffering) can always be counterbalanced or offset by a sufficient addition of independent goods?
(Some of the main problems with this premise are outlined here, as part of a post where I explore what might be the most intuitive ways to think of wellbeing without it.)
2. Does the sequence assume an additive / summative / Archimedean theory of aggregation (i.e. that “quantity can always substitute for quality”), or does it also engage with some forms of lexical priority views (i.e. that “some qualities get categorical priority”)?
The links are to a post where I visualize and compare the aggregation-related ‘repugnant conclusions’ of different Archimedean and lexical views. (It’s essentially a response to Budolfson & Spears, 2018/2021, but can be read without having read them.) To me, the comparison makes it highly non-obvious whether Archimedean aggregation should be a default assumption, especially given points like those in my footnote 15, where I argue (and point to arguments) that a lexical priority view of aggregation need not, on closer inspection, be implausible either in theory or in practice:
it seems plausible to prioritize the reduction of certainly unbearable suffering over certainly bearable suffering (and over the creation of non-relieving goods) in theory. Additionally, such a priority is, at the practical level, quite compatible with an intuitive and continuous view of aggregation based on the expected amount of lexically bad states that one’s decisions may influence (Vinding, 2022b, 2022e).
Thus, ‘expectational lexical minimalism’ need not be implausible either in theory or in practice, because in practice we always have nontrivial uncertainty about when and where an instance of suffering becomes unbearable. Consequently, we should still be sensitive to variations in the intensity and quantity of suffering-moments. Yet we need not necessarily formalize any part of our decision-making process as Archimedean aggregation over tiny intrinsic disvalue, as opposed to thinking in terms of continuous probabilities, and expected amounts, of lexically bad suffering.
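(As a rough illustration of the expectational idea, using purely hypothetical numbers rather than anything from the sequence: one would compare options by the expected number of lexically bad states they leave in place,

$$\mathbb{E}[\text{unbearable suffering} \mid A] \;=\; \sum_i p_i \, n_i,$$

where $p_i$ is the probability of outcome $i$ and $n_i$ its number of unbearable suffering-moments. So an option with, say, a 10% chance of 1,000 such moments (expectation 100) would be dispreferred to one with a 50% chance of 100 such moments (expectation 50), which keeps the view continuous and sensitive to both probability and quantity.)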
The above questions/assumptions seem practically relevant for whether to prioritize (e.g.) x-risk reduction over the reduction of severe bads / s-risks. However, it seems to me that these questions are (within EA) often sidelined, not deeply engaged with, or are given strong implicit answers one way or another, without flagging their crucial relevance for cause prioritization.
Thus, for anyone who feels uncertain about these questions (i.e. resisting a dichotomous yes/no answer), it could be valuable to engage with them as additional kinds of uncertainties that one might have.
Hi Teo. Those are important uncertainties, but our sequence doesn’t engage with them. There’s only so much we could cover! We’d be glad to do some work in this vein in the future, contingent on funding. Thanks for raising these significant issues.