It also looks like you're actually >50% downside-focused, conditional on strong longtermism, just before suffering-focused views.
I don't think this is quite right.
The simplest reason is that I think suffering-focused does not necessarily imply downside-focused. Just as a (weakly) non-suffering-focused person could still be downside-focused for empirical reasons (basically the scale, tractability, and neglectedness of s-risks vs other x-risks), a (weakly) suffering-focused person could still focus on non-s-risk x-risks for empirical reasons. This is partly because one can have credence in a weakly suffering-focused view, and partly because one can have a lot of uncertainty between suffering-focused views and other views.
As I note in the spreadsheet, if I lost all credence in the claim that I shouldn't subscribe to suffering-focused ethics, I'd:
Probably work on s-risks or s-risk factors (rather than other x-risks or x-risk factors, though note that these categories can overlap).
But maybe still work on non-s-risk x-risks, such as extinction risks. (This depends on how much more important I'd now believe reducing suffering is; the importance, tractability, and neglectedness of specific intervention options; and my comparative advantage.) [emphasis added]
That said, it is true that I'm probably close to 50% downside-focused. (It's even possible that it's over 50%; I just think that the spreadsheet alone doesn't clearly show that.)
And, relatedly, there's a substantial chance that in future I'll focus on actions somewhat tailored to reducing s-risks, most likely by researching authoritarianism & dystopias or broad risk factors that might be relevant both to s-risks and other x-risks. (Though, to be clear, there's also a substantial chunk of why I see authoritarianism/dystopias as bad that isn't about s-risks.)
This all speaks to a weakness of the spreadsheet, which is that it shows just one specific conjunctive set of claims that can lead me to my current bottom-line stance. This makes my current stance seem less justified-in-my-own-view than it really is, because I haven't captured other possible paths that could lead me to it (such as being suffering-focused but thinking s-risks are far less likely or tractable than other x-risks).
And another weakness is simply that these claims are fuzzy and that I place fairly little faith in my credences even strongly reflecting my own views (let alone being "reasonable"). So one should be careful about simply multiplying things together. That said, I do think that doing so can be somewhat fruitful and interesting.
It seems confusing for a view that's suffering-focused not to commit you (or at least the part of your credence that's suffering-focused, which may compromise with other parts) to preventing suffering as a priority. I guess people include weak NU/negative-leaning utilitarianism/prioritarianism in (weakly) suffering-focused views.
What would count as weakly suffering-focused to you? Giving 2x more weight to suffering than you would want to in your personal tradeoffs? 2x more weight to suffering than pleasure at the same "objective intensity"? Even less than 2x?
FWIW, I think a factor of 2 is probably within the normal variance of judgements about classical utilitarian pleasure-suffering tradeoffs, and there probably isn't any objective intensity, or at least it isn't discoverable, so such a weakly suffering-focused view wouldn't really be distinguishable from classical utilitarianism (or a symmetric total view with the same goods and bads).
It sounds like part of what you're saying is that it's hard to say what counts as a "suffering-focused ethical view" if we include views that are pluralistic (rather than only caring about suffering), and that part of the reason for this is that it's hard to know what "common unit" we could use for both suffering and other things.
I agree with those things. But I still think the concept of "suffering-focused ethics" is useful. See the posts cited in my other reply for some discussion of these points (I imagine you've already read them and just think that they don't fully resolve the issue, and I think you'd be right about that).
What would count as weakly suffering-focused to you? Giving 2x more weight to suffering than you would want to in your personal tradeoffs? 2x more weight to suffering than pleasure at the same "objective intensity"? Even less than 2x?
I think this question isn't quite framed right: it seems to assume that the only suffering-focused view we have in mind is some form of negative utilitarianism, and seems to ignore population ethics issues. (I'm not saying you actually think that SFE is just about NU or that population ethics isn't relevant, just that that text seems to imply that.)
E.g., an SFE view might prioritise suffering-reduction not exactly because it gives more weight to suffering than pleasure in normal decision situations, but rather because it endorses "the asymmetry".
But basically, I guess I'd count a view as weakly suffering-focused if, in a substantial number of decision situations I care a lot about (e.g., career choice), it places noticeably "more" importance on reducing suffering by some amount than on achieving other goals "to a similar amount". (Maybe "to a similar amount" could be defined from the perspective of classical utilitarianism.) This is of course vague, and that definition is just one I've written now rather than this being something I've focused a lot of time on. But it still seems a useful concept to have.
(Minor point: "preventing suffering as a priority" seems quite different from "downside-focused". Maybe you meant "as the priority"?)
I think my way of thinking about this is very consistent with what I believe are the "canonical" works on "suffering-focused ethics" and "downside-focused views". (I think these may have even been the works that introduced those terms, though the basic ideas preceded the works.) Namely:
https://longtermrisk.org/the-case-for-suffering-focused-ethics/
https://forum.effectivealtruism.org/posts/225Aq4P4jFPoWBrb5/cause-prioritization-for-downside-focused-value-systems
The former opens with:
Suffering-focused ethics is an umbrella term for moral views that place primary or particular importance on the prevention of suffering. Most views that fall into this category are pluralistic in that they hold that other things besides reducing suffering also matter morally [emphasis added]
And the latter says:
Whether a normative view qualifies as downside-focused or upside-focused is not always easy to determine, as the answer can depend on difficult empirical questions such as how much disvalue we can expect to be able to reduce versus how much value we can expect to be able to create. [...] The following commitments may lead to a downside-focused prioritization:
(Non-welfarist) views that include considerations about suffering prevention or the prevention of rights violations as a priority or as (central) part of an objective list of what constitutes goodness [emphasis added]
I think another good post on this is Descriptive Population Ethics and Its Relevance for Cause Prioritization, and that it again supports the way I'm thinking about this. (But to save time / be lazy, I won't mine it for useful excerpts to share here.)