Solution to the two envelopes problem for moral weights

Summary

When taking expected values, the results can differ radically based on which common units we fix across possibilities. If we normalize relative to the value of human welfare, then other animals will tend to be prioritized more than if we instead normalize by the value of animal welfare or use other approaches to moral uncertainty.

  1. For welfare comparisons and prioritization between different moral patients like humans, other animals, aliens and artificial systems, I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value. Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty (more).

  2. I revise the account in light of the possibility of multiple different human reference points between which we don’t have fixed uncertainty-free comparisons of value, like pleasure vs belief-like preferences (cognitive desires) vs non-welfare moral reasons, or specific instances of these. If and because whatever moral reasons we apply to humans, (similar or other) moral reasons aren’t too unlikely to apply with a modest fraction of the same force to other animals, then the results would still be relatively animal-friendly (more).

    1. I outline why this condition plausibly holds across moral reasons and theories, so that it’s plausible we should be fairly animal-friendly (more).

  3. I describe and respond to some potential objections:

    1. There could be inaccessible or unaccessed conscious subsystems in our brains that our direct experiences and intuitions do not (adequately) reflect, and these should be treated like additional moral patients (more).

    2. The approach could lead to unresolvable disagreements between moral agents, but this doesn’t seem any more objectionable than any other disagreement about what matters (more).

    3. Epistemic modesty about morality may push for also separately normalizing by the values of nonhumans or against these comparisons altogether, but this doesn’t seem to particularly support the prioritization of humans (more).

  4. I consider whether similar arguments apply in cases of realism vs illusionism about phenomenal consciousness, moral realism vs moral antirealism, and person-affecting views vs total utilitarianism, and find them less compelling for these cases, because value may be grounded on fundamentally different things (more).

How this work has changed my mind: I was originally very skeptical of intertheoretic comparisons of value/​reasons in general, including across theories of consciousness and the scaling of welfare and moral weights between animals, because of the two envelopes problem (Tomasik, 2013-2018) and the apparent arbitrariness involved. This lasted until around December 2023, and some arguments here were originally going to be part of a piece strongly against such comparisons for cross-species moral weights, which I now respond to here along with positive arguments for comparisons.

Acknowledgements

I credit Derek Shiller and Adam Shriver for the idea of treating the problem like epistemic uncertainty relative to what we experience directly. I’d also like to thank Brian Tomasik, Derek Shiller and Bob Fischer for feedback. All errors are my own.

Background

On the allocation between the animal-inclusive and human-centric near-termist views, specifically, Karnofsky (2018) raised a problem:

The “animal-inclusive” vs. “human-centric” divide could be interpreted as being about a form of “normative uncertainty”: uncertainty between two different views of morality. It’s not entirely clear how to create a single “common metric” for adjudicating between two views. Consider:

  • Comparison method A: say that “a human life improved” is the main metric valued by the human-centric worldview, and that “a chicken life improved” is worth >1% of these (animal-inclusive view) or 0 of these (human-centric view). In this case, a >10% probability on the animal-inclusive view would lead chickens to be valued >0.1% as much as humans, which would likely imply a great deal of resources devoted to animal welfare relative to near-term human-focused causes.

  • Comparison method B: say that “a chicken life improved” is the main metric valued by the animal-inclusive worldview, and that “a human life improved” is worth <100 of these (animal-inclusive view) or an astronomical number of these (human-centric view). In this case, a >10% probability on the human-[centric] view would be effectively similar to a 100% probability on the human-centric view.

These methods have essentially opposite practical implications. Method A is the more intuitive one for me (it implies that the animal-inclusive view sees “more total value at stake in the world as a whole,” and this implication seems correct), but the lack of a clear principle for choosing between the two should give one pause, and there’s no obviously appropriate way to handle this sort of uncertainty. One could argue that the two views are “philosophically incommensurable” in the sense of dealing with fundamentally different units of value, with no way to identify an equivalence-based conversion factor between the two.[1]

For example, if one thinks there’s a 50% chance that one should be weighing the interests of chickens 1% as much as those of humans, and a 50% chance that one should not weigh them at all, one might treat this situation as though chickens have an “expected moral weight” of 0.5% (50% * 1% + 50% * 0) relative to humans. This would imply that (all else equal) a grant that helps 300,000 chickens is better than a grant that helps 1,000 humans, while a grant that helps 100,000 chickens is worse.
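The arithmetic in Karnofsky's example can be sketched directly; the credences and grant sizes are his illustrative numbers, and `grant_value` is a hypothetical helper for the comparison:

```python
# Hypothetical sketch of Karnofsky's expected moral weight arithmetic:
# 50% credence that chickens' interests count 1% as much as humans',
# 50% credence that they don't count at all.
p_inclusive = 0.5           # credence in the animal-inclusive view
weight_if_inclusive = 0.01  # chicken weight relative to a human on that view
expected_weight = p_inclusive * weight_if_inclusive + (1 - p_inclusive) * 0.0

def grant_value(n_chickens=0, n_humans=0):
    """Value of a grant in human-equivalent units, all else equal."""
    return n_humans + n_chickens * expected_weight

print(expected_weight)  # 0.005
print(grant_value(n_chickens=300_000) > grant_value(n_humans=1_000))  # True
print(grant_value(n_chickens=100_000) > grant_value(n_humans=1_000))  # False
```

A grant helping 300,000 chickens beats one helping 1,000 humans on this expected weight, while one helping 100,000 chickens does not, matching the example.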


We can define random variables to capture these statements more precisely via a formalization with expected values. Let H denote the (average or marginal) moral value per human life improved by some intervention, and let C denote the (average or marginal) moral value per chicken life improved by another intervention. Then,

  1. Method A could follow from assuming H is constant and calculating the expected value per chicken life improved as E[C] = E[(C/H) * H] = H * E[C/H]. Indeed, if H is assumed constant, then E[C] = H * E[C/H]. Furthermore, under linear views like utilitarianism, if H is constant, we can normalize the value of all interventions by it, across species, and so we can just value the chicken welfare improvements proportionally to E[C/H].

  2. Method B could follow from assuming C is constant and calculating the expected value per human life improved as E[H] = C * E[H/C], or, after normalizing by C, E[H/C].

Based on Karnofsky’s example, we could take C/H to be 1% with probability 50% and (approximately) 0 otherwise, and H/C to be 100 (100 = 1/​(1%)) with probability 50% and astronomical (possibly infinite) otherwise. If C is never 0, then C/H and H/C are multiplicative inverses of one another this way, i.e. H/C = 1/(C/H). However, E[C/H] = 50% * 1% + 50% * 0 = 0.5%, while E[H/C] is astronomical or infinite, and 1/E[C/H] = 200. In general, E[H/C] > 1/E[C/H] as long as H/C is defined, non-negative and not constant.[2] The fact that these two expected values of ratios aren’t inverses of one another is why the two methods give different results for prioritization.
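A small sketch can confirm that the two normalizations disagree, using Karnofsky's numbers with a large finite stand-in for the "astronomical" ratio (an assumption purely for illustration):

```python
scenarios = [
    # (probability, value of C/H in that scenario)
    (0.5, 0.01),   # animal-inclusive view: a chicken worth 1% of a human
    (0.5, 1e-12),  # human-centric view: ~0, using a small positive stand-in
]

E_C_over_H = sum(p * r for p, r in scenarios)        # method A's expectation
E_H_over_C = sum(p * (1 / r) for p, r in scenarios)  # method B's expectation

print(E_C_over_H)      # ~0.005: chickens ~0.5% of humans under method A
print(E_H_over_C)      # ~5e11: dominated by the "astronomical" scenario
print(1 / E_C_over_H)  # ~200: E[H/C] far exceeds 1/E[C/H]
```

The two expectations are not inverses of one another, which is exactly why the two methods diverge in practice.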

Rather than specific welfare improvements in particular, C and H could denote welfare ranges, i.e. the difference between the maximum welfare at a time and the minimum welfare at a time of the average chicken or average human, respectively. Or, they may be the “moral weight” of the average chicken or the average human, respectively, as multipliers by which to weigh measures of welfare. We may let H denote the moral value per unit of human welfare improvement according to a measure of human welfare, like DALYs, QALYs, or measures of life satisfaction, and let C denote the moral value per unit of chicken welfare improvement according to a measure of chicken welfare.[3] See Fischer, 2022 and Rethink Priorities’ Moral Weight Project Sequence for further discussion of welfare ranges, capacities for welfare and moral weights.

This problem has been called the two envelopes problem, in analogy with the original two envelopes problem (Tomasik, 2013-2018, Tomasik et al., 2009-2014). I use Karnofsky (2018)’s framing because of its more explicit connection to effective altruist cause prioritization.

I make a case here that we should fix and normalize by the (or a) human moral weight, using something like comparison method A, with some caveats and adjustments.

Welfare in human-relative terms

The strengths of our reasons to reduce human suffering or satisfy human belief-like preferences, say, don’t typically seem to depend on our understanding of their empirical or descriptive nature. This is not how we actually do ethics. If we found out more about the nature of consciousness and suffering, which we define in human terms, we typically wouldn’t decide it mattered less (or more) than we thought before.[4] Finding out that pleasure is mediated not by dopamine or serotonin but by a separate system, or that humans only have around 86 billion neurons instead of 100 billion doesn’t change how important our own experiences directly seem to us. Nor does changing our confidence between the various theories of consciousness.

Instead, we directly value our experiences, not our knowledge of what exactly generates them. Water didn’t become more or less important to human life from finding out it was H2O.[5] The ultimate causes of why we care about something may depend on its precise empirical or descriptive nature, but the proximal reasons — for example, how suffering feels to us and how bad it feels to us, say — do not change with our understanding of its nature. One might say we know (some of) these reasons by direct experience.[6] My own suffering just directly seems bad to me,[7] and how bad it directly seems does not depend on my beliefs about theories of consciousness or about how many neurons we have.

And, in fact, on utilitarian views using subjective theories of welfare like hedonism, desire theories and preference views, how bad my suffering actually (directly) is for me on those theories plausibly should just be how bad my suffering (directly) seems to me.[8] In that case, uncertainty about the nature of these “seemings” or appearances and how they arise and their extent in other animals is just descriptive uncertainty, like uncertainty about the nature and prevalence of any other physical or biological phenomenon, like gravity or cancer.[9] This is not a problem of comparisons of reasons across moral theories or moral uncertainty. It’s a problem of comparisons of reasons across theories of the empirical or descriptive nature of the things to which we assign moral value. There is, however, still moral uncertainty in deciding between hedonism, desire theories, preference views and objective list theories, and between variants of each, among other things.

Despite later warning about two-envelopes effects in Muehlhauser, 2018, one of Muehlhauser (2017)’s illustrations of how he understands moral patienthood is based on his own direct experience of pain:

What are the implications of illusionism for my intuitions about moral patienthood? In one sense, there might not be any.360 After all, my intuitions about (e.g.) the badness of conscious pain and the goodness of conscious pleasure were never dependent on the “reality” of specific features of consciousness that the illusionist thinks are illusory. Rather, my moral intuitions work more like the example I gave earlier: I sprain my ankle while playing soccer, don’t notice it for 5 seconds, and then feel a “rush of pain” suddenly “flood” my conscious experience, and I think “Gosh, well, whatever this is, I sure hope nothing like it happens to fish!” And then I reflect on what was happening prior to my conscious experience of the pain, and I think “But if that is all that happens when a fish is physically injured, then I’m not sure I care.” And so on.

It’s the still poorly understood “whatever this is”, i.e. his direct experience, and things “like it” that are of fundamental moral importance and for which he’s looking in other animals. Accounts of conscious pain under specific theories are just designed to track “whatever this is” and things “like it”, but almost all theories will be wrong. The example also seems best interpreted as an illustration of comparison method A, weighing fish pain relative to his experience of pain from spraining an ankle.

The relevant moral reasons are or derive directly from these direct experiences or appearances, and the question is just when, where (what animals and other physical systems) and to what extent these same (kinds of) appearances and resulting reasons apply. Whatever this is that we’re doing, to what extent do others do it or something like it, too? All of our views and theories of the value of welfare should already be or should be made human-relative, because the direct moral reasons we have to apply all come from our own individual experiences and modest extensions, e.g. assuming our experiences are similar to other humans’. As we find out more about other animals and the nature of human welfare, our judgements about where other animals stand in relation to our concept and direct impressions of human welfare — the defining cases — can change.

So I claim that we have direct access to the grounds for the disvalue of human suffering and human moral value, i.e. the variable H in the previous section, and we understand the suffering and moral value of other beings, including the (dis)value C in chickens as above, relative to humans. Because of this, we can fix H and use comparison method A, at least across some theories, including at least separately across theories of the nature of unpleasantness, across theories of the nature of felt desires, and across theories of the nature of belief-like preferences.

On the other hand, it doesn’t make much sense for us to fix the moral value of chicken suffering or the chicken moral weight, because we (or you, the reader) only understand it in human-relative terms, and especially in reference to our (respectively, your) own experiences.[10]

And it could end up being the case — i.e. with nonzero probability — that chickens don’t matter at all, not even infinitesimally. They may totally lack the grounds to which we assign moral value, e.g. they may not be capable of suffering at all, even though I take it to be quite likely that they can suffer, or moral status could depend on more than suffering. Then, we aren’t even fixing the moral weight of a chicken at all, if it can be 0 with nonzero probability and nonzero with nonzero probability. And because of the possible division by 0 moral weight, the expected moral weights of humans and all other animals will be infinite or undefined.[11] It seems such a view wouldn’t be useful for guiding action.[12]

Similarly, we wouldn’t normalize by the moral weights of any other animals, artificial systems, plants or rocks.

We have the most direct access to (some) human moral reasons, can most reliably understand (some of) them and so typically theorize morally relative to (some of) them. How we handle uncertainty should reflect these facts.

Finding common ground

How intense or important suffering is could be quantified differently across theories, both empirical theories and moral theories. In some cases, there will be foundational metaphysical claims inherent to those theories that could ground comparisons between the theories. In many or even most important cases, there won’t be.

What common metaphysical facts could ground intertheoretic comparisons of value or reasons across theories of consciousness as different as Integrated Information Theory, Global Workspace Theory and Attention Schema Theory? Under their standard intended interpretations, they have radically different and mutually exclusive metaphysical foundations — or basic building blocks —, and each of these foundations is false, except possibly one. Similarly, there are very different and mutually exclusive proposals to quantify the empirical intensity of welfare and moral weights, like counting just-noticeable differences, functions of the number of relevant (firing) neurons or cognitive sophistication, direct subjective intrapersonal weighing, among others (e.g. Fischer, 2023 with model descriptions in the tabs of this sheet). How do the numbers of relevant neurons relate to the number of just-noticeable differences across all possible minds, not just humans? There’s nothing clearly inherent to these accounts that would ground intertheoretic comparisons between them, at least given our current understanding. But we can look outside the contents of the theories themselves to the common facts they’re designed to explain.

When ice seemed like it could have turned out to be something other than the solid phase of water, we would be comparing the options based on the common facts — the evidence or data — the different possibilities were supposed to explain. And then by finding out that ice is water, you learn that there is much more water in the world, because you would then also have to count all the ice on top of all the liquid water.[13] If your moral theory took water to be intrinsically good and more of it to be better, this would be good news (all else equal).

For moral weights across potential moral patients, the common facts our theories are designed to explain are those in human experiences, our direct impressions and intuitions, like how bad suffering feels or appears to be to us. It’s these common facts that can be used to ground intertheoretic comparisons of value or reasons, and it’s these common facts or similar ones for which we want to check in other beings or systems. So, we can hold the strengths of reasons from these common facts constant across theories, if and because they ground value directly on these common facts in the same way, e.g. the same hedonistic utilitarianism under different theories of (conscious) pleasure and unpleasantness, or the same preference utilitarianism under different theories of belief-like preferences. And in recognizing animal consciousness, like finding out that ice is water, you could come to see the same kind of empirical facts and therefore moral value in some other animals, finding more of it in the world.

Multiple possible reference points

However, things aren’t so simple as fixing the human moral weight across theories. We should be unsure about that, too. Perhaps a given instance of unpleasantness matters twice as much as another given belief-like preference, or perhaps it matters half as much, with 50% probability each. We get the two envelopes problem here, too. If we were to fix the value of the unpleasantness, then the belief-like preference would have an expected value 50% * 0.5 + 50% * 2 = 1.25 times as great as the value of the unpleasantness. If we were to fix the value of the belief-like preference, then the unpleasantness would likewise have an expected value 1.25 times as great as the value of the belief-like preference.
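The symmetric arithmetic above can be sketched directly:

```python
# Two human reference points: with 50% each, the unpleasantness matters
# 2x or 0.5x as much as the belief-like preference. Fixing either side
# makes the *other* look more valuable in expectation.
ratios = [(0.5, 2.0), (0.5, 0.5)]  # (probability, unpleasantness per unit of preference)

# Fix the belief-like preference: expected relative value of the unpleasantness
E_unpleasantness = sum(p * r for p, r in ratios)
# Fix the unpleasantness: expected relative value of the belief-like preference
E_preference = sum(p * (1 / r) for p, r in ratios)

print(E_unpleasantness)  # 1.25
print(E_preference)      # 1.25 -- each looks 25% more valuable than the other
```

Each reference point makes the other look better in expectation, which is the two envelopes structure in miniature.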

We’re uncertain about which theory of wellbeing is correct and how to weigh human unpleasantness vs human pleasure vs human felt desires vs human belief-like preferences vs human choices vs objective goods and objective bads (and between each). The relative strengths of these different corresponding reasons are not in general fixed across theories. Therefore, the strengths of our reasons can only be fixed for at most one of these at a time (if their relationships aren’t fixed). And the positive arguments for fixing any specific one and not the others seem likely to be weak, so it really is plausible that none should be fixed.

Similarly, we can also be uncertain about tradeoffs, strengths and intensities within the same type of welfare for a human, too, e.g. just degrees of unpleasantness, resulting in another two envelopes problem. For example, I’m uncertain about the relative intensities and moral disvalues of pains I’ve experienced.[14] In general, people may use multiple reference points with which they’re familiar, like multiple specific experiences or intensities, and be uncertain about how they relate to one another.

There could also be non-welfarist moral reasons to consider, like duties, rights, virtues, justifiability and reasonable complaints (under contractualism), special relationships, and specific instances of any of these. We can be uncertain about how they relate to each other and the various types of welfare, too.

So, what do we do? We could separately fix and normalize by each possible (typically human-based) reference point, i.e. a specific moral reason, and use intertheoretic comparisons relative to it, e.g. the expected value of belief-like preferences (cognitive desires) in us and other animals relative to the value of some particular (human) pleasure. I’ll elaborate here.

We pick a very specific reference point or moral reason r and fix its moral weight as a common unit relative to which we measure everything else. r takes the role of H in the human-relative method A in the background section. We measure the moral weights of humans (or specific human welfare concerns) like E[H/r] and that of chickens like E[C/r], and we do the same for everything else. And we also do all of this separately for every possible (typically human-based) reference point r.

For uncertainty between choices of reference points, e.g. between a human pleasure and a human belief-like preference, we would apply a different approach to moral uncertainty that does not depend on intertheoretic comparisons of value or reasons, e.g. a moral parliament.[15] Or, when we can fix (or bound or get a distribution on) the ratios between all pairs of reference points in a subset of them, we could take a weighted sum across the reference points (or subsets of them), like in maximizing expected choiceworthiness, and calculate the expected moral weights of chickens and humans (on that subset of reference points) as Σ_r w_r * E[C/r] and Σ_r w_r * E[H/r], respectively, for weights w_r summing to 1.[16]

In either case, it’s essentially human-relative, if and because r is almost always a human reference point.
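The weighted-sum version can be sketched with made-up numbers; the reference points, weights w_r, and expectations E[C/r] and E[H/r] below are all hypothetical placeholders, not estimates:

```python
reference_points = {
    # r: (weight w_r, E[C/r], E[H/r]) -- all values invented for illustration
    "human_pleasure":          (0.6, 0.02, 1.0),
    "human_belief_preference": (0.4, 0.05, 1.2),
}

# Expected moral weights as weighted sums over the reference points
E_C = sum(w * ec for w, ec, _ in reference_points.values())
E_H = sum(w * eh for w, _, eh in reference_points.values())

print(E_C / E_H)  # the chicken:human ratio of expected moral weights, ~0.03
```

The output ratio is still human-relative in the relevant sense: every term is measured against a human reference point.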

There are some things we could say about E[C] vs E[H] given some constraints on the relationship between the distributions of C and H. Using the same numbers as Karnofsky (2018)’s and assuming

  1. C and H are both nonnegative,

  2. C is positive with probability at least 50%, i.e. P(C > 0) ≥ 50%, and

  3. Whatever value H reaches, C reaches a value at least 1/​100th as high with at least 50% of the probability, i.e. P(C ≥ x/100) ≥ 50% * P(H ≥ x) for all x > 0 (to replace C/H = 1/100 with probability 50%),

then[17]

E[C] ≥ 50% * (1/100) * E[H] = 0.005 * E[H],

like in Karnofsky (2018)’s illustration of the human-relative method A. In general, we multiply the probability ratio (50% here) by the value ratio (1/​100 here) to get a lower bound on the ratio of expected moral weights (0.005 here). We can also upper bound E[C] with a multiple of E[H] by reversing the inequalities between the probabilities in 2 and 3.[18]
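A Monte Carlo sketch can check this bound under an assumed toy model in which conditions 1–3 hold (with condition 3 holding with equality): C equals H/100 with independent probability 50% and 0 otherwise, so P(C ≥ x/100) = 50% * P(H ≥ x). The exponential distribution for H is an arbitrary choice; any nonnegative distribution works:

```python
import random

random.seed(0)
N = 200_000
H = [random.expovariate(1.0) for _ in range(N)]  # any nonnegative H will do
# C = H/100 with probability 50%, else 0, so P(C >= x/100) = 50% * P(H >= x):
C = [h / 100 if random.random() < 0.5 else 0.0 for h in H]

E_H = sum(H) / N
E_C = sum(C) / N

print(E_C / E_H)  # ~0.005, the probability ratio times the value ratio
```

The simulated ratio lands at the bound because the toy model satisfies condition 3 exactly; models that exceed the condition would give a larger ratio.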

What can we say about the ratio of expected moral weights?

Would we end up with a ratio of expected moral weights between chickens and humans that’s relatively friendly to chickens? This will depend on the details and our credences.

Consider a lower bound for the chicken’s expected moral weight relative to a human’s. Say we fix some human reference point r and its corresponding moral reason.

As in the inequality from the previous section, we might think that whatever reason applies to a given human and with whatever strength, a chicken has at least 50% of the probability of having the same or a similar reason apply, with at least 1/​100th of the strength (relative to the reference point). That would give a ratio of 0.005. Or something similar with different numbers.

We might expect something like this because the central moral reasons from major moral theories seem to apply importantly to farmed chickens with probability not far lower than they do to humans.[19] Let’s consider several:

  1. However intensely a human can suffer, can a chicken suffer at least 1/​100th as intensely? I would go further and say a chicken has a decent chance of having the capacity to suffer similarly intensely, e.g. at least half as intensely.

    1. Consider some intensity of suffering, and take a physical pain in a human, say a bone break or burn, that would induce it in a typical human under typical circumstances. Then, it seems not very unlikely — e.g. at least a 5% probability — that a similar pain in a typical chicken (e.g. a similar bone break or burn with a similar whole-body proportion of affected pain signaling nerves) would result in a similar intensity of suffering. However intense the suffering in a human being burned or boiled alive, it doesn’t seem too unlikely a chicken could suffer similarly intensely under the same conditions.

    2. If intensity scales as a function of relative motivational salience or attention — i.e. relative to their maximum possible or the hypothetical full pull of their attention —, or the proportion of suffering-contributing neurons firing (per second) or the proportion of just-noticeable differences away from indifference, then this doesn’t seem to favour humans in particular at all. In general, suppose that the intensity or disvalue of human suffering scales as some function f of some underlying variable x, e.g. a measure of motivational salience, attention, neurons firing per second, or just-noticeable differences, as f(x). f could scale very aggressively, even exponentially. Still, f can be reinterpreted as a function g scaling in the proportion of the maximum of x for humans, x_max, with g(x/x_max) = f(x). Then, we might assign a modest probability to the disvalue in chicken suffering also scaling roughly like g, with x/x_max for the chicken being relative to the chicken’s own maximum value of x.

  2. Do chickens have important belief-like preferences? I will discuss this further in another piece, but defend it briefly here. If conscious hedonic states or conscious felt desires count as or ground belief-like preferences, as I discussed in a previous piece, then chickens probably do have belief-like preferences. Rats and pigs also seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[20] Perhaps the same would hold for chickens (it hasn’t been studied in birds, as far as I know). Perhaps they can generalize this further to unpleasantness or aversion, which would constitute their concepts of bad and worse. The strengths of animals’ belief-like preferences would be a separate issue, but interpersonal comparisons of belief-like preferences in general may be impossible (or extremely vague), even between humans, so it wouldn’t even be clear that any given human has more at stake for belief-like preferences than the typical farmed chicken, or vice versa, of course. There could be no fact of the matter.

  3. Do chickens have important rights or do we have important duties to them? Yes, according to Regan (1983, 1989) and Korsgaard (2018, 2020), the former modifying the Kantian position and the latter extending Kantian arguments to other animals and further claiming general interpersonal incomparability in Korsgaard, 2020. Furthermore, even if, like Kant originally did, we recognized duties only to rational beings who can recognize normative concepts, then, as above, other animals’ conscious hedonic states and felt desires or generalizable discrimination could qualify as or ground normative concepts. Other animals are also plausibly rational to some extent, even if minimally.

  4. Should our actions be justifiable to chickens, real or hypothetical trustees for them (Scanlon, 1998, p.183), or idealized rational versions of them? If yes, then chickens could be covered by contractualism, and what’s at stake for them seems reasonably large, given points 1 and 2 and their severe suffering on factory farms. See also the last two sections, on contractualist protections for animals and future people, in Ashford and Mulgan, 2018.

  5. Could the capacity to mount reasonable complaints be enough to be covered under contractualism? Can chickens actually mount reasonable complaints? If yes to both, then chickens could be covered by contractualism. Chickens can and do complain about their situations and mistreatment in their own ways (vocalizations, i.e. gakel-calls, feelings of unpleasantness and aversion, attempts to avoid, etc.), and what makes a complaint reasonable could just be whether the reasons for the complaint are strong enough relative to other reasons (e.g. under Parfit’s Complaint Model or Scanlon’s modified version, described in Scanlon, 1998, p.229), which does not require (much) rationality on the part of the complainant. Severe suffering, like what factory farmed chickens endure, seems like a relatively strong reason for complaint.

  6. How do virtues guide our treatment of chickens? The virtues of compassion, beneficence and justice seem applicable here, given their circumstances.

  7. Do we have any special obligations to chickens? We — or chicken farmers, at least, and perhaps as consumers indirectly — are responsible for their existences and lives, like we are for those of our companion animals and children.
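Returning to the scaling argument in point 1.2, the reinterpretation of an aggressive scaling function as a function of the proportion of a species' own maximum can be sketched as follows; the exponential form and the maxima are made-up assumptions for illustration only:

```python
import math

HUMAN_X_MAX = 1e9    # assumed human maximum of the underlying variable x
CHICKEN_X_MAX = 1e7  # assumed (much smaller) chicken maximum of x

def g(proportion):
    """Disvalue as a function of the proportion of a species' own maximum of x."""
    return math.exp(10 * proportion) - 1  # deliberately aggressive scaling

def human_disvalue(x):
    # The human curve f(x), rewritten as g(x / x_max)
    return g(x / HUMAN_X_MAX)

def chicken_disvalue(x):
    # The hypothesis given modest probability: the same curve g applies,
    # relative to the chicken's own maximum of x
    return g(x / CHICKEN_X_MAX)

# At their respective maxima, disvalue is equal despite the 100x gap in absolute x:
print(human_disvalue(HUMAN_X_MAX) == chicken_disvalue(CHICKEN_X_MAX))  # True
```

Even a very steep f, once read as g of a proportion, no longer automatically favours the species with the larger absolute value of x.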

See also the articles by Animal Ethics on the status of nonhuman animals under various ethical theories and the weight of animal interests.

However, many of the comparisons here probably do in fact depend on comparisons across moral theories, e.g. Kant’s original animal-unfriendly position vs Regan and (perhaps) Korsgaard’s animal-friendly positions. The requirement of (sufficient) rationality for Kant’s reasons to apply could be an inherently moral claim, not a merely empirical one. If Regan and Korsgaard don’t require rationality for moral status, are they extending the same moral reasons Kant recognizes to other animals, or grounding different moral reasons? They might be the same intrinsically, if we see the restriction to rational beings as not changing the nature of the moral reasons. Perhaps the moral reasons come first, and Kant mistakenly inferred that they apply only to rational beings. Or, if they are different, are they similar enough that we can identify them anyway? On the other hand, could the kinds of reasons Regan and Korsgaard recognize as applying to other animals be far, far weaker than Kant’s that apply to humans, or incomparable to them? Could Kant’s apply to other animals directly with modest probability anyway?

Similar issues could arise between contractualist theories that protect nonrational (or not very rational) beings and those that only protect (relatively) rational beings. I leave these as open problems.

Objections

In this section, I describe and respond to some potential objections to the approach and rationale for intertheoretic comparisons of moral weights I’ve described.

Conscious subsystems

First, what we fix should be human welfare as standardly and simultaneously accessed for report. There could be multiple conscious (or otherwise intrinsically morally considerable) subsystems in a brain to worry about — whether inaccessible in general or not accessed at any particular time — effectively multiple moral patients with their own moral interests in each brain. Our basic moral intuitions about the value of human welfare and the common facts we’re trying to explain probably do not reflect any inaccessible conscious subsystems in our brains, and in general would plausibly only reflect conscious subsystems when they are actually accessed. So, we should normalize relative to what we actually access. It could then be that the number of such conscious subsystems scales in practice with the number of neurons in a brain, so that the average human would have many more of them in expectation, and so could have much greater expected moral weight than other animals with fewer neurons (Fischer, Shriver & St. Jules, 2023 (EA Forum post)).

In the most extreme case, we end up separately counting overlapping systems that differ only by a single neuron (Mathers, 2021) or even a single electron (Crummett, 2022), and the number of conscious subsystems may grow polynomially or even exponentially with the number of neurons or the number of particles, by considering all connected subsets of neurons and neural connections or “connected” subsets of particles.[21] Even a small probability on an aggressive scaling hypothesis could lead to large predictable expected differences in total moral weights between humans, and could give greater expected moral weight to the average whale with more neurons than the average human (List of animals by number of neurons—Wikipedia). With a small but large enough probability to fast enough scaling with the number of neurons or particles, a single whale could have more expected moral weight than all living humans combined. That seems absurd.
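The fanaticism worry can be made concrete with a toy calculation. All of the neuron counts, credences and scaling functions below are made up for illustration, not estimates; the point is only that an aggressive scaling hypothesis can dominate the expectation even at tiny credence:

```python
# Toy model: expected number of conscious subsystems per brain under a
# mixture of scaling hypotheses. All numbers here are made up.

human_neurons = 86e9   # rough neuron count for a human brain
whale_neurons = 200e9  # hypothetical whale with more neurons
humans_alive = 8e9

# Credences over how the number of conscious subsystems scales with the
# neuron count n: one subject per brain, linear, quadratic, or exponential.
hypotheses = [
    (0.90, lambda n: 1.0),
    (0.09, lambda n: n / human_neurons),
    (0.01 - 1e-12, lambda n: (n / human_neurons) ** 2),
    (1e-12, lambda n: 2.0 ** (n / 1e9)),  # aggressive exponential scaling
]

def expected_subsystems(n):
    """Expected number of conscious subsystems for a brain with n neurons."""
    return sum(p * f(n) for p, f in hypotheses)

# Even with credence 1e-12 in the exponential hypothesis, it dominates:
# one whale's expected subsystem count exceeds that of all living humans.
print(expected_subsystems(whale_neurons)
      > humans_alive * expected_subsystems(human_neurons))  # True
```

Here a credence of one in a trillion in exponential scaling is enough to make a single (hypothetical) whale outweigh all living humans in expectation, which is the absurd conclusion in the text.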

In this case, how we decide to individuate and count conscious systems seems to be a matter of moral uncertainty. Empirically, I am pretty confident both that the system that is my whole brain is conscious and that the system that is my whole brain excluding any single neuron or electron is conscious. I just don’t think I should count these systems separately to add up. And then, even if I should assign some non-negligible probability that I should count such systems separately and that the same moral reasons apply across views on counting conscious systems — this would be a genuine identification of moral reasons across different moral theories, not just identifying the same moral reasons across different empirical views — it seems far too fanatical if I prioritize humans (or whales) because of the tiny probability I assign to the number of conscious subsystems of a brain scaling aggressively with the number of neurons or electrons. I outline some other ways to individuate and count subsystems in this comment, and I would expect these to give a number of conscious subsystems scaling at most roughly proportionally in expectation with the number of neurons.

There could be ways to end up with conscious subsystems scaling with the number of neurons that are more empirically based, rather than dependent on moral hypotheses. However, this seems unlikely, because the apparently valuable functions realized in brains seem to occur late in processing, after substantial integration and high-level interpretation of stimuli (see this comment and Fischer, Shriver & St. Jules, 2023 (EA Forum post)). Still, even a small but non-negligible probability could make a difference, so the result will depend on your credences.

Unresolvable disagreements

Second, it could also be difficult for intelligent aliens and us, if both impartial, to agree on how to prioritize humans vs the aliens under uncertainty, if and because we’re using our own distinct standards to decide what matters and how much. Suppose the aliens have their own concept of a-suffering, which is similar to, but not necessarily identical to, our concept of suffering. It may differ from human suffering in that some functions are missing, or additional functions are present, or the number of times they’re realized differs, or the relative or absolute magnitudes of (e.g. cognitive) effects differ. Or, if they haven’t gotten that far in their understanding of a-suffering, it could just be the fact that a-suffering feels different or might feel different from human suffering, so their still vague concept picks out something potentially different from ours. Or vice versa.

In the same way chickens matter relatively more on the human-relative view than chickens do on the chicken-relative view, as above from Karnofsky, 2018, humans and the aliens could agree on (almost) all of the facts and have the same probability distributions for the ratio of the moral weight of human suffering to the moral weight of a-suffering, and yet still disagree on expected moral weights and about how to treat each other. Humans could weigh humans and aliens relative to human suffering, while the aliens could weigh humans and aliens relative to a-suffering. In relative terms and for prioritization, the aliens would weigh us more than we weigh ourselves, but we’d weigh them more than they weigh themselves.
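This symmetry can be checked with a toy distribution (the numbers are made up): both sides agree that the ratio R of alien to human moral weight is 10 or 1/10 with equal probability, yet each side's expectation, taken in its own units, inflates the other's weight:

```python
# Toy two envelopes calculation with a shared, agreed distribution over
# R = (moral weight of a-suffering) / (moral weight of human suffering).
# The probabilities and ratio values are made up for illustration.

ratios = [(0.5, 10.0), (0.5, 0.1)]  # (probability, value of R)

# Humans fix human suffering at 1 unit, so expected alien weight is E[R].
alien_weight_human_view = sum(p * r for p, r in ratios)

# Aliens fix a-suffering at 1 unit, so expected human weight is E[1/R].
human_weight_alien_view = sum(p / r for p, r in ratios)

# Each side weighs the other's suffering at 5.05x its own:
print(alien_weight_human_view, human_weight_alien_view)
```

Both expectations come out above 1 (here 5.05 each), so, in relative terms, we'd weigh the aliens more than they weigh themselves and vice versa, despite full agreement on the distribution of R.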

One might respond that this seems too agent-relative, and we should be able to agree on priorities if we agree on all the facts, and share priors and the same impartial utilitarian moral views. However, while consciousness remains unsolved, humans don’t know what it’s like to be the aliens or to a-suffer, and the aliens don’t know what it’s like to be us or suffer like us. We have access to different facts, and this is not a source of agent-relativity, or at least not an objectionable one. Furthermore, we are directly valuing our own experiences, human suffering, and the aliens are directly valuing their own, a-suffering, and if these differ enough, then we could also disagree about what matters intrinsically or how. This seems no more agent-relative than the disagreement between utilitarians that disagree just on whether hedonism or desire theory is true: a utilitarian grounding welfare based on human suffering and a utilitarian doing so based on a-suffering just disagree about the correct theory of wellbeing or how it scales.

Epistemic modesty about morality

Still, perhaps both we and the aliens should be more epistemically modest[22] about what matters intrinsically and how, and so give weight to the direct perspectives of the aliens. If we try to entertain and weigh all points of view, then we would need to make and agree on genuine intertheoretic comparisons of value, which seems hard to ground and justify, or else we’d use an approach that doesn’t depend on intertheoretic comparisons. This could bring us and the aliens closer to agreement about optimal resource allocation, and perhaps convergence under maximal epistemic modesty, assuming we also agree on how to weigh perspectives and an approach to normative uncertainty.

Doing this can take some care, because we’re uncertain about whether the aliens have any viewpoint at all for us to adopt, and similarly they could be uncertain about us having any such viewpoint. This could prevent full convergence.

On the other hand, chickens presumably don’t think at all about the moral value of human welfare in impartial terms, so there very probably is no such viewpoint to adopt on their behalf, or else only one that’s extremely partial, e.g. some chickens may care about some humans to which they are emotionally attached, and many chickens may fear or dislike humans. Chickens’ points of view therefore wouldn’t grant humans much or any moral weight at all, or may even grant us negative overall weight instead. However, the right response here may count against moral impartiality, not against humans in particular. Indeed, most humans seem to be fairly partial, too, and we might partially defer to them, too. Either way, this perspective doesn’t look like the chicken-relative comparison method B from Karnofsky, 2018 that grants humans astronomically more weight than chickens.

How might we get such a perspective? We might idealize: ask what a chicken would believe if they had the capacities to consider the question and were impartial, while screening off any value contributed by those extra capacities themselves. Or, we might consider a hypothetical impartial human or other intelligent being whose capacities for suffering are like those of a chicken, whatever those may be. Rather than actual viewpoints for which we have specific evidence of their existence, we’re considering conceivable viewpoints.

This seems pretty speculative and weird, so I have some reservations about it, but I’m not sure either way.

A plausibly stronger objection to epistemic modesty about moral (and generally normative) stances is that it can undermine too much of whatever moral views you or I or anyone else hold, including the foundational beliefs of effective altruists or assumptions in the project of effective altruism, like impartiality and the importance of beneficence. I am strongly disinclined to practically abandon my own moral views this way. I think this is a more acceptable position than rejecting epistemic modesty about non-normative claims, especially for a moral antirealist, i.e. someone who rejects stance-independent moral facts. We may have no or only weak reasons for epistemic modesty about moral facts in particular.

On the other hand, rather than abandoning foundational beliefs, it may actually support them. It may capture impartiality in a fairly strong sense by weighing each individual’s normative stance(s). Any being who suffers finds their own suffering bad in some sense, and this stance is weighed. A typical parent cares a lot for their child, so the child gets extra weight through the normative stance of the parent. Some humans particularly object to exploitation and using others as means to ends, and this stance is weighed. Some humans believe it’s better for far more humans to exist, and this stance is weighed. Some humans believe it’s better for fewer humans to exist, and this stance is weighed. The result could look like a kind of impartial person-affecting preference utilitarianism, contractualism or Kantianism (see also Gloor, 2022),[23] but relatively animal-inclusive, because whether or not other animals meet some thresholds for rationality or agency, they could have their own perspectives on what matters, e.g. their suffering and its causes.

If normative stances across species, like even across humans, are often impossible to compare, then the implications for prioritization could be fundamentally indeterminate, or at least very vague. Or, they could be dominated by those with the most fanatical or lexical stances, who prioritize infinite value at stake without trading it off against mere finite stakes. Or, we might normalize each individual’s values (or utility function) by their own range or variance in value (Cotton-Barratt et al., 2020), and other animals could outweigh humans through their numbers in the near term.
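As a sketch of the last option, in the spirit of range normalization (Cotton-Barratt et al., 2020) — the stances, outcomes and numbers below are invented for illustration — each individual's utilities over outcomes are rescaled to span [0, 1] before summing, so raw scale differences wash out and more numerous individuals can dominate:

```python
# Range normalization sketch: rescale each individual's utilities over
# outcomes to [0, 1], then sum. Stances and numbers are made up.

def range_normalize(utilities):
    """Rescale a dict of outcome -> utility so its values span [0, 1]."""
    lo, hi = min(utilities.values()), max(utilities.values())
    return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

# Outcomes: "A" = prioritize humans, "B" = prioritize chickens.
# One human with a large raw scale, three chickens with small raw scales.
stances = [{"A": 1000.0, "B": 0.0}] + [{"A": 0.0, "B": 1.0}] * 3

totals = {"A": 0.0, "B": 0.0}
for stance in stances:
    for outcome, u in range_normalize(stance).items():
        totals[outcome] += u

# The human's large raw scale confers no extra weight after
# normalization; the more numerous chickens carry outcome "B".
print(totals)  # {'A': 1.0, 'B': 3.0}
```

The design choice doing the work is that each individual's stance contributes at most one unit of swing between their best and worst outcomes, regardless of how intensely their raw values are stated.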

Other applications of the approach

What other intertheoretic comparisons of value could this epistemic approach apply to? I will consider:

  1. Realism vs illusionism about phenomenal consciousness.

  2. Moral realism vs moral antirealism.

  3. Person-affecting views vs total utilitarianism.

First, realism vs illusionism about phenomenal consciousness. Illusionists deny the phenomenal nature of consciousness and the existence of qualia as “Introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective” (Frankish, 2012, preprint), introduced by Lewis (1929, pp.121, 124-125). Realists accept the phenomenal nature of consciousness and/​or qualia. Illusionists do not deny that consciousness exists.[24] In section 5.2, Kammerer, 2019 argues that if phenomenal consciousness would ground moral value if it existed, it would be an amazing coincidence for pain to be as bad under (strong) illusionism, which denies the existence of phenomenal consciousness, as it is under realism which accepts the existence of phenomenal consciousness. However, if you’re already a moral antirealist or take an epistemic approach to intertheoretic comparisons, then it seems reasonable to hold the strengths of your reasons to be the same, but just acknowledge that you may have misjudged their source or nature. Rather than phenomenal properties as their source, it could be quasi-phenomenal properties, where “a quasi-phenomenal property is a non-phenomenal, physical property (perhaps a complex, gerrymandered one) that introspection typically misrepresents as phenomenal” (Frankish, 2017, p. 18), or even the beliefs, appearances or misrepresentations themselves. Frankish (2012, preprint) proposed a theory-neutral explanandum for consciousness:

Zero qualia The properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective.

These zero qualia could turn out to be phenomenal, under realism, or non-phenomenal and so quasi-phenomenal under illusionism (Frankish, 2012, preprint), but the judgements to be captured are the same, so it seems reasonable to treat the resulting reasons as the same. Or, we could use a less precise common ground: consciousness, whatever it is.

A similar approach could be taken with respect to uncertainty between metaethical positions, using our moral judgements or intuitions as the common facts. Again, we may be wrong about the nature of what they’re supposed to refer to or even the descriptive reality of these moral judgements and intuitions — e.g. whether they express propositions, as in cognitivism, or desires, emotions or other pro-attitudes and con-attitudes, as in non-cognitivism (van Roojen, 2023), and, under cognitivism, whether they are stance-independent or stance-dependent —, but we will still have them in any case. I’d judge torture very negatively regardless of my metaethical stance. Even more straightforwardly, for any specific moral realist stance, there’s a corresponding subjectivist stance that recognizes the exact same moral facts (and vice versa?), but just interprets them as stance-dependent rather than stance-independent. Any non-cognitivist pro-attitude or desire could be reinterpreted as expressing a belief (or appearance) that something is better.[25] This could allow us to at least match identical moral theories, e.g. the same specific classical utilitarianism, under the different metaethical interpretations.

Riedener (2019) proposes a similar and more general constructivist approach based on epistemic norms.[26] He illustrates with person-affecting views vs total utilitarianism, arguing for holding the strengths of reasons to benefit existing people the same between welfarist person-affecting views and total utilitarianism,[27] which would tend to favour total utilitarianism under moral uncertainty. However, if we’re comparing a Kantian person-affecting view and total utilitarianism, he argues that we may have massively misjudged our reasons other than for beneficence between the two views. So the comparison is more complex, and reasons for beneficence could be stronger under total utilitarianism, while our other reasons could be stronger under Kantian views, and we should balance epistemic norms and the particulars to decide these differences.

To be clear, I’m much less convinced of the applications in these cases, and there are important reasons for doubt:

  1. Realist accounts of phenomenal consciousness are designed primarily to explain our actually (allegedly) phenomenal properties, which illusionists deny, while illusionism is designed primarily to explain our beliefs about consciousness or quasi-phenomenal properties that lead to them, so realist and illusionist accounts disagree about what is to be explained. That phenomenal properties are in practice or even by physical necessity quasi-phenomenal could be incidental to a realist, hence philosophical zombie (p-zombie) thought experiments (see Kirk, 2023 for a standard reference). If a moral position directly grants moral value to phenomenal properties in virtue of being phenomenal, then this is not a common ground with moral positions that instead grant moral value to quasi-phenomenal properties in virtue of being quasi-phenomenal or to the resulting dispositions. That being said, I think moral positions should generally not ground value on phenomenal consciousness specifically, but instead on consciousness, whatever it is.

  2. Similarly, moral realists take actually (allegedly) stance-independent moral facts as fundamental, which to them may not derive from (actual, hypothetical or idealized) moral judgements or intuitions, which are stances that could differ between people, while subjectivists and non-cognitivists seem to take our (actual, hypothetical or idealized) moral judgements or intuitions as fundamental. That moral judgements and intuitions are evidence about and sometimes track stance-independent moral facts isn’t enough for a moral realist, because they could be mistaken. There does not seem to be a common ground for granting moral value between these positions.

  3. Those holding apparently welfarist person-affecting views may disagree that total utilitarianism is designed to explain the same kinds of reasons or facts as their views are. They may understand their moral reasons in ways similar to a Kantian or contractualist or otherwise reject (standard conceptions of) axiology, but also deny act-omission distinctions and see in others the same kinds of reasons they have to promote their own interests. It’s not preference satisfaction or even welfare per se that matters, but that things go better or worse according to the preferences, points of view, ends or normative stances of individuals who have them.[28] And merely possible people don’t (actually) have them.

In each of the above cases, one view takes as fundamental and central measures or consequences of what the other view takes as fundamental and central.[29] This will look like Goodhart’s law to those who insist it’s not these measures or consequences that matter but what is being measured or the causes. Those holding one of the pairs of views could complain that the others are gravely mistaken about what matters and why, so Riedener (2019)’s conservatism may not tell us much about how to weigh the views. The comparisons seem less reasonable, and we could end up with two envelopes problems again, fixing one theory’s fundamental grounds and evaluating both theories relative to it.

On the other hand, while not every pair of theories of consciousness or the value of welfare will agree on common facts to explain, many will. For example, realists about phenomenal consciousness will tend to agree with each other that it’s (specific) phenomenal properties themselves that their theories are designed to explain, so we could compare reasons across realist theories. Illusionists will tend to agree that it’s our beliefs (or appearances) about consciousness that are the common facts to explain, so we could compare reasons across illusionist theories. And theories of welfare and its value are designed to explain, among other things, why suffering is bad or seems bad. So, many reason comparisons can be grounded in practice, even if not all. And regardless of the reasons and whether they can be compared across all views, the common facts from which comparable reasons derive are based on human experience, so our moral views are justifiably human-relative.

  1. ^

    Karnofsky (2018) wrote:

In this case, a >10% probability on the human-inclusive view would be effectively similar to a 100% probability on the [human-centric] view.

    I assume he meant “human-centric view” instead of “human-inclusive view”, so I correct the quote with square brackets here.

  2. ^

E[1/X] = 1/E[X] if and only if X is equal to a constant with probability 1, and E[1/X] > 1/E[X] if X is nonnegative and not equal to a constant with probability 1. This follows from Jensen’s inequality, because f defined by f(x) = 1/x is convex.

  3. ^

    I would either use per unit averages for chickens and humans, respectively, or assume here that the value scales in proportion (or at least linearly) with each unit of measured welfare for each of humans and chickens, separately.

  4. ^

However, some may believe objective moral value is threatened by illusionism about phenomenal consciousness, which denies that phenomenal consciousness exists. These positions do still recognize that consciousness exists, but they deny that it is phenomenal. We could just substitute an illusionist account of consciousness wherever phenomenal consciousness was used in our ethical theories, although some further revisions may be necessary to accommodate differences. For further discussion, see Kammerer, 2019, Kammerer, 2022 or a later section in this piece. The difference here arises because some ethical theories directly value phenomenal consciousness specifically, and not (or less) consciousness in general.

    Other examples could be free will, libertarian free will specifically or god(s) which may turn out not to exist, and so moral theories that tied some reasons specifically to them would lose those reasons.

    If a moral theory only places value on things that actually exist in some form, while being more agnostic about their nature, then the value can follow the vague and revisable concepts of those things.

  5. ^

    Except possibly for indirect and instrumental reasons. It’s useful to know water is H2O.

  6. ^

    This could be cashed out in terms of acquaintance, as in knowledge by acquaintance (Hasan, 2019, Duncan, 2021, Knowles & Raleigh, 2019), or appearance, as in phenomenal conservatism (Huemer, 2013). Adam Shriver made a similar point in conversation.

  7. ^

    This may be more illustrative than literal for me. Personally, it’s more that other people’s suffering seems directly and importantly bad to me, or indirectly and importantly bad through my emotional responses to their suffering.

  8. ^

    However, which kind of “seeming” or appearance should be used can depend on the theory of wellbeing, i.e. unpleasantness under hedonism, cognitive desires or motivational salience under desire theories and preferences under preference theories. I concede later that we may need to separate by these very broad accounts of welfare (and perhaps more finely) rather than treat them all as generating the same moral reasons.

  9. ^

    From conversation with multiple people, something like this seems to be the standard view.

  10. ^

Our sympathetic responses to the suffering of another individual — chicken, human or otherwise — don’t necessarily reliably track how bad it is for them from their own perspective, but they are probably closer for other humans, because of greater similarity between humans (neurological, functional, cognitive, psychological, behavioural).

  11. ^

Writing H and C for the human and chicken moral weights, E[H/C] is infinite (or undefined) if H > 0, C ≥ 0, and C = 0 with nonzero probability, because we get H/0 with nonzero probability. E[H/C] is undefined if H = 0 and C = 0 with nonzero probability, because we get 0/0 with nonzero probability.

However, in principle, humans in general, or each proposed type of wellbeing, could fail to matter with nonzero probability, so we could get a similar problem normalizing by human welfare or moral weights.

  12. ^

    There may be some ways to address the issue.

    You could treat the 0 moral weight like an infinitesimal and do arithmetic with it, but I think this entirely denies the possibility that chickens don’t matter at all. This seems ad hoc and to have little or no independent justification.

You could take conditional expected values in the denominator (and numerator) first, which gives a nonzero value in the denominator (assuming Cromwell’s rule), before taking the ratio and the expected value of the ratio. In other words, you take the expected value of a ratio of conditional expected values of moral weights. Then, in effect, you’re treating the conditional expected value of chicken moral weight as equal across some views. Most naturally, you would take the conditional expected values over descriptive uncertainty, conditional on each fixed normative stance — so that the resulting prescriptions would agree with each normative stance — and then take the expected value of the ratio across these normative stances/​theories (over normative uncertainty).
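A numerical sketch of this two-step procedure (the credences and weights are made up): within each normative stance, expected moral weights are taken over descriptive uncertainty first, so the denominator is nonzero even though chicken moral weight is 0 in some descriptive possibilities:

```python
# Sketch of the conditional-expectation fix, with made-up numbers.
# Within each normative stance, take expected moral weights over
# descriptive uncertainty first (nonzero, per Cromwell's rule), then
# take the expected ratio across stances.

# Each stance: (credence, [(descriptive prob, chicken weight, human weight), ...])
stances = [
    (0.5, [(0.9, 0.3, 1.0), (0.1, 0.0, 1.0)]),  # chickens might not matter
    (0.5, [(1.0, 0.05, 1.0)]),                  # chickens matter a little
]

expected_ratio = 0.0
for credence, worlds in stances:
    e_chicken = sum(p * c for p, c, _ in worlds)  # 0.27, then 0.05
    e_human = sum(p * h for p, _, h in worlds)    # 1.0 in both stances
    expected_ratio += credence * (e_human / e_chicken)

# Naively, E[human weight / chicken weight] would be undefined here,
# since chicken weight is 0 with overall probability 0.05. The two-step
# version stays finite:
print(expected_ratio)  # ~11.85
```
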

  13. ^

    If you had already measured all the liquid water directly and precisely, you wouldn’t expect any more or less liquid water from finding out ice is also water.

  14. ^

    I even doubt that there is any precise fact of the matter for the ratio of their intensities or moral disvalue.

  15. ^

    Approaches include Open Philanthropy’s worldview diversification approach (Karnofsky, 2018), variance voting (MacAskill et al., 2020, Ch4), moral parliaments (Newberry & Ord, 2021), a bargain-theoretic approach (Greaves & Cotton-Barratt, 2019), or the Property Rights Approach (Lloyd, 2022). For an overview of moral uncertainty, see MacAskill et al., 2020.

  16. ^

    With multiple values for a given , e.g. a distribution of values, we could get a distribution or set of expected moral weights for chickens and humans. To these, we could apply an approach to moral uncertainty that doesn’t depend on intertheoretic reason comparisons.

  17. ^

    Let and be the quantile functions of and , respectively. Then, for p between 0 and 1,

    Then,

  18. ^

    and gives .

  19. ^

    However, some major moral theories don’t weigh reasons by summation, aggregate at all or take expected values. The expected moral weights of chickens and humans may not be very relevant in those cases.

  20. ^

    Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3. Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it’s the anxiety itself that they’re discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants. Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, “jet lag”, defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.

    However, Mason and Lavery (2022) caution:

    But could such results merely reflect a “blindsight-like” guessing: a mere discrimination response that need not reflect underlying awareness? After all, as we have seen for S.P.U.D. subjects, decerebrated pigeons can use colored lights as DSs (128), and humans can use subliminal visual stimuli as DSs [e.g., (121)]. We think several refinements could reduce this risk.

  21. ^

    There are exponential (non-tight) upper bounds for the number of connected subgraphs of a graph, and hence connected neural subsystems of a brain (Pandey & Patra, 2021, Filmus, 2018). However, not any such connected subsystem would be conscious. Also, with bounded degree, i.e. a bounded number of connections/​synapses per neuron in your set of brains under consideration, the number of connected subgraphs can be bounded above by a polynomial function of the number of neurons (Eppstein, 2013).
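For small toy graphs, these growth rates can be checked directly by brute force (the "neuron" graphs are of course made up): a bounded-degree path on n vertices has n(n+1)/2 connected vertex subsets, while a complete graph has 2^n − 1.

```python
# Brute-force count of nonempty vertex subsets that induce a connected
# subgraph, for toy graphs standing in for neural subsystems.

from itertools import combinations

def count_connected_subsets(n, edges):
    """Count nonempty subsets of {0..n-1} inducing a connected subgraph."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def connected(subset):
        # Depth-first search restricted to the subset.
        subset = set(subset)
        seen, stack = set(), [next(iter(subset))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] & subset)
        return seen == subset

    return sum(connected(s)
               for k in range(1, n + 1)
               for s in combinations(range(n), k))

path = lambda n: [(i, i + 1) for i in range(n - 1)]   # degree <= 2
complete = lambda n: list(combinations(range(n), 2))  # unbounded degree

print(count_connected_subsets(6, path(6)))      # 21 = 6*7/2, polynomial
print(count_connected_subsets(6, complete(6)))  # 63 = 2^6 - 1, exponential
```

The path graph illustrates the bounded-degree polynomial bound, while the complete graph shows the exponential worst case; as the footnote notes, not every connected subsystem counted this way would actually be conscious.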

  22. ^

    For a defense of epistemic modesty, see Lewis, 2017.

    Aumann’s agreement theorem, which supports convergence in beliefs between ideally rational Bayesians with common priors about events of common knowledge, may not be enough for convergence here. This is because our conscious experiences are largely private and not common knowledge. Even if they aren’t inherently private, without significant advances in theory or technology that would resolve remaining factual disagreements or far more introspection and far more detailed introspective reports than are practical, they’ll remain largely private in practice.

    Or, our priors could differ, based on our distinct conscious experiences, which we use as references to understand moral patienthood and often moral value in general.

  23. ^

    I’d only be inclined to weigh the actual or idealized intrinsic/​terminal values of actual moral patients, not any possible or conceivable moral patients or perspectives. The latter also seems particularly ill-defined. How would we weigh possible or conceivable perspectives?

  24. ^

    The term ‘illusionism’ seems prone to cause misunderstanding, and multiple illusionists have taken issue with the term, including Graziano (2016, ungated), Humphrey (2016) and Veit and Browning (2023, preprint).

  25. ^

See my previous piece discussing how desires and hedonic states may be understood as beliefs or appearances of normative reasons. Others have defended desire-as-belief, desire-as-perception and generally desire-as-guise or desire-as-appearance of normative reasons, the good or what one ought to do. See Schroeder, 2015, 1.3 for a short overview of different accounts of desire-as-guise of good, and Part I of Deonna & Lauria (eds.), 2017 for more recent work on and discussion of such accounts and alternatives. See also Archer, 2016, Archer, 2020 for some critiques, and Milona & Schroeder, 2019 for support for desire-as-guise (or desire-as-appearance) of reasons. A literal interpretation of Roelofs (2022, ungated)’s “subjective reasons, reasons as they appear from its perspective” would be as desire-as-appearance of reasons.

  26. ^

    Riedener, 2019 writes, where IRCs is short for intertheoretic reason-comparisons:

    So I’ll propose a version of this approach, on which ought-facts are grounded in epistemic norms. In other words, I’ll propose a form of constructivism about IRCs. If I’m right, IRCs are not facts out there that hold independently of facts about morally uncertain agents. They hold in virtue of being the result of an ideally reasonable deliberation, in terms of certain epistemic norms, about what you ought to do in light of your uncertainty.

    So very roughly, these norms suggest that without any explanation, you shouldn’t assume that you’ve always systematically and radically misjudged the strength of your everyday paradigm reasons. And they imply that you should more readily assume you may have misjudged some reasons if you have an explanation for why and how you may have done so, or if these reasons are less mundane and pervasive. This seems intuitively plausible. But Simplicity, Conservatism and Coherence might be false, or not quite correct as I’ve stated them, or there might be other and more important norms besides them.27 My aim is not to argue for these precise norms. I’m happy if it’s plausible that some such epistemic norms hold, and that they can constrain the IRCs or ought-judgements you can reasonably make. If that’s so, we can invoke a form of constructivism to ground IRCs. We can understand truth about IRCs as the outcome of ideally reasonable deliberation – in terms of principles like the above – about what you ought to do in light of your uncertainty. By comparison, consider the view that truth in first-order moral theory is simply the result of an ideal process of systematizing our pre-theoretical moral beliefs.28 On this view, it’s not that there’s some independent Platonic realm of moral facts, and that norms like simplicity and coherence are best at guiding us towards it. Rather, the principles are first, and ‘truth’ is simply the outcome of the principles. We can invoke a similar kind of constructivism about IRCs. On this view, principles like Simplicity, Conservatism and Coherence are not justified in virtue of their guiding us towards an independent realm of ought-facts or IRCs. Rather, they help constitute this realm.

    So this provides an answer to why some ought-facts or IRCs hold. It’s not because of mind-independent metaphysical facts about how theories compare, or how strong certain reasons would be if we had them. It’s simply because of facts about how to respond reasonably to moral evidence or have reasonable moral beliefs. Ultimately, we might say, it’s because of facts about us – about why we might have been wrong about morality, and by how much and in what way, and so on.

  27. ^

    Riedener (2019) writes:

    According to TU, you have all the reasons that you have according to PAD – reasons to benefit existing others – but also some additional reasons beyond them. So on this interpretation, the least radical change in your credences and the most simple ultimate credence distribution will be such that your reasons to benefit existing people are the same on both theories. Unless you have some additional beliefs that could render other beliefs more coherent, this IRC will be most reasonable in light of the above principles.

  28. ^

    Rabinowicz and Österberg (1996) describe similar accounts as object versions of preference views, contrasting them with satisfaction versions, which are instead concerned with preference satisfaction per se. Also similar are actualist preference-affecting views (Bykvist, 2007) and conditional reasons (Frick, 2020).

  29. ^

    Or in the case of illusionism vs realism about phenomenal consciousness on one interpretation of illusionism, the comparisons are grounded based on such measures or consequences for both, i.e. the (real or hypothetical) dispositions for phenomenality/​qualia beliefs, but what matters are the quasi-phenomenal properties that lead to these beliefs, which are either actually phenomenal under realism or not under illusionism. On another interpretation of illusionism, it’s the beliefs themselves that matter, not quasi-phenomenal properties in general. For more on the distinction, see Frankish, 2021.

Crossposted to LessWrong