Linkpost for various recent essays on suffering-focused ethics, priorities, and more

The following are (links to) various essays that I have published over the last few months. Some of the essays have been published on the website of the Center for Reducing Suffering (CRS), and some of them have been published on my own blog.

CRS essays

A phenomenological argument against a positive counterpart to suffering

Various views deny that suffering has a positive counterpart. Proponents of such views often pursue a line of argument that focuses on the prevalence of subtle frustrations and bothersome sensations. That is, when we think we are in a neutral state and claim that some pleasure takes us above that neutral state, what we are typically experiencing is really a subtly bothered and unsatisfied state that becomes (somewhat) relieved of its commonly overlooked unpleasant features (see e.g. Sherman, 2017, pp. 103-107; Gloor, 2017, sec. 2.1; Knutsson, 2022, sec. 6).

This essay will pursue a different line of argument. Rather than focusing on unpleasant states and arguing for their subtle omnipresence, my aim here is instead to zoom in on the purportedly positive side. I will argue that purportedly positive experiences do not possess any property that renders them genuine opposites of painful and uncomfortable experiences, whether in phenomenological or in axiological terms.

Reply to the “evolutionary asymmetry objection” against suffering-focused ethics

An objection that is sometimes raised against suffering-focused ethics is that our intuitions about the relative value of suffering and happiness are skewed toward the negative for evolutionary reasons, and hence we cannot trust our intuition that the reduction of suffering is more valuable and more morally important than the creation of happiness. My aim in this post is to reply to this objection.

Reply to the scope neglect objection against value lexicality

Some views hold that no amount of mild discomfort can be worse than a single instance of extreme suffering (i.e. they endorse value lexicality between extreme suffering and mild discomfort). An objection to such views is that they are biased by scope neglect — our tendency to disregard the number of affected beings in our evaluations of a problem. Since we cannot comprehend the badness of a vast amount of mild discomfort, the objection goes, we cannot trust our intuitive assessment that extreme suffering is worse than any amount of mild discomfort. My aim in this brief post is to reply to this objection.

Comments on Mogensen’s “The weight of suffering”

Andreas Mogensen’s paper “The weight of suffering” presents an interesting argument in favor of the axiological position that “there exists some depth of suffering that cannot be compensated for by any measure of well-being” — a position he calls “LTNU” (Mogensen, 2022, abstract). Mogensen then proceeds to explore how one might respond to that argument and thereby reject LTNU.

My aim in this post is to raise some critical points in response to this paper. As a preliminary note, I should say that I commend Mogensen for taking up this crucial issue regarding the weight of suffering, and for exploring it in an open-ended manner.

Reply to Chappell’s “Rethinking the Asymmetry”

My aim in this post is to respond to the arguments presented in Richard Yetter Chappell’s “Rethinking the Asymmetry”. Chappell argues against the Asymmetry in population ethics, which roughly holds that the addition of bad lives makes the world worse, whereas the addition of good lives does not make the world better (other things being equal).

A thought experiment that questions the moral importance of creating happy lives

Many people have the intuition that extinction would be bad. A problem, however, is that the term “extinction” carries many different connotations, and extinction may be considered bad for many different reasons. For instance, an extinction scenario might be considered bad because it involves frustrated preferences, violations of consent, or lethal violence. Yet extinction scenarios need not involve any of these elements in principle. By considering thought experiments that involve extinction without involving any of the elements listed above, we can get a better sense of what might explain the intuition that extinction would be bad. In this post, I will present a thought experiment that casts doubt on the notion that extinction would be bad or morally objectionable because it would prevent the creation of future happy lives.

Lexical priority to extreme suffering — in practice

Some ethical views grant a lexical priority to the prevention of extreme suffering over the prevention of mild forms of suffering, meaning that preventing extreme suffering takes precedence over preventing mild suffering.

Such views have been claimed to have implausible practical implications. For instance, one objection is that such a lexical priority implies that we should neglect all endeavors that do not aim directly at the reduction of extreme suffering. My goal in this post is to reply to a couple of these objections, and to clarify some key aspects regarding how one might think about prioritization in light of lexical views.

Personal blog essays

Reasons to include insects in animal advocacy

I have seen some people claim that animal activists should primarily be concerned with certain groups of numerous vertebrates, such as chickens and fish, whereas we should not be concerned much, if at all, with insects and other small invertebrates. (See e.g. here.) I think there are indeed good arguments in favor of emphasizing chickens and fish in animal advocacy, yet I think those same arguments tend to support a strong emphasis on helping insects as well. My aim in this post is to argue that we have compelling reasons to include insects and other small invertebrates in animal advocacy.

The catastrophic rise of insect farming and its implications for future efforts to reduce suffering

On the 17th of August 2021, the EU authorized the use of insects as feed for farmed animals such as chickens and pigs. This was a disastrous decision for sentient beings, as it may greatly increase the number of beings who will suffer in animal agriculture. Sadly, this was just one in a series of disastrous decisions that the EU has made regarding insect farming in the last couple of years. Most recently, in February 2022, they authorized the farming of house crickets for human consumption, after having made similar decisions for the farming of mealworms and migratory locusts in 2021.

Many such catastrophic decisions probably lie ahead, seeing that the EU is currently reviewing applications for the farming of nine additional kinds of insects. This brief post reviews some reflections and potential lessons in light of these harmful legislative decisions.

Beware underestimating the probability of very bad outcomes: Historical examples against future optimism

It may be tempting to view history through a progressive lens that sees humanity as climbing toward ever greater moral progress and wisdom. As the famous quote popularized by Martin Luther King Jr. goes: “The arc of the moral universe is long, but it bends toward justice.”

Yet while we may hope that this is true, and do our best to increase the probability that it will be, we should also keep in mind that there are reasons to doubt this optimistic narrative. For some, the recent rise of right-wing populism is a salient reason to be less confident about humanity’s supposed path toward ever more compassionate and universal values. But it seems that we find even stronger reasons to be skeptical if we look further back in history. My aim in this post is to present a few historical examples that in my view speak against confident optimism regarding humanity’s future.

Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies

Our uncertainty about how the future will unfold is vast, especially on long timescales. In light of this uncertainty, it may be natural to think that our uncertainty about strategies must be equally vast and intractable. My aim in this brief post is to argue that this is not the case.

What does a future dominated by AI imply?

Among altruists working to reduce risks of bad outcomes due to AI, I sometimes get the impression that there is a rather quick step from the premise “the future will be dominated by AI” to a practical position that roughly holds that “technical AI safety research aimed at reducing risks associated with fast takeoff scenarios is the best way to prevent bad AI outcomes”.

I am not saying that this is the most common view among those who work to prevent bad outcomes due to AI. Nor am I saying that the practical position outlined above is necessarily an unreasonable one. But I think I have seen (something like) this sentiment assumed often enough for it to be worthy of a critique. My aim in this post is to argue that there are many other practical positions that one could reasonably adopt based on that same starting premise.

Why I don’t prioritize consciousness research

For altruists trying to reduce suffering, there is much to be said in favor of gaining a better understanding of consciousness. Not only may it lead to therapies that can mitigate suffering in the near term, but it may also help us in our large-scale prioritization efforts. For instance, clarifying which beings can feel pain is important for determining which causes and interventions we should be working on to best reduce suffering.

These points notwithstanding, my own view is that advancing consciousness research is not among the best uses of marginal resources for those seeking to reduce suffering. My aim in this post is to briefly explain why I hold this view.