Jason Schukraft
Hi Dan,
Thanks for your questions. I’ll let Marcus and Peter answer the first two, but I feel qualified to answer the third.
Certainly, the large number of invertebrate animals is an important factor in why we think invertebrate welfare is an area that deserves attention. But I would advise against relying too heavily on numbers alone when assessing the value of promoting invertebrate welfare. There are at least two important considerations worth bearing in mind:
(1) Among sentient animals, there may be significant differences in capacity for welfare or moral status. If these differences are large enough, they might matter more than differences in the numbers of different types of animals.
(2) At some point, Pascal’s Mugging will rear its ugly head. There may be some threshold below which we are rationally required to ignore probabilities. It’s not clear to me where that threshold lies. (And it’s also not clear that this is the best way to address Pascal’s Mugging.) There are about 440 quintillion nematodes alive at any given time, which sounds like a pretty good reason to work on nematode welfare, even if one’s credence in their sentience is really low. But nematodes are nothing compared to bacteria. There are something like 5 million trillion trillion bacteria alive at any given time. At some point, it seems as if expected value calculations cease to be appropriately action-guiding, but, again, it’s very uncertain where to draw the line.
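To make the arithmetic concrete (the credence here is purely illustrative, not a figure I endorse): with roughly $4.4 \times 10^{20}$ nematodes alive at any given time, even a one-in-a-million credence in nematode sentience yields

$$4.4 \times 10^{20} \times 10^{-6} = 4.4 \times 10^{14}$$

expected sentient individuals, and the same calculation run on $5 \times 10^{30}$ bacteria produces a figure about ten orders of magnitude larger still ($5 \times 10^{24}$). Hence the worry: no plausibly small credence seems able to keep the numbers from dominating.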
Hi Denis,
Lots of really good questions here. I’ll do my best to answer.
- Thinking vs. reading: I think it depends on the context. Sometimes it makes sense to lean toward thinking more, and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.
- Self-consciousness: Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.
- Is there something interesting here?: Yep, this also happens to me. Unfortunately, I don’t have any particular insight. Oftentimes the only way to know whether an idea is interesting is to put in the hard exploratory work. Of course, one shouldn’t be afraid to abandon an idea if it looks increasingly unpromising.
- Survival vs. exploratory mindset: Insofar as I understand the terms, an exploratory mindset is an absolute must. Not sure how to cultivate it, though.
- Optimal hours of work per day: I work between 4 and 8 hours a day. I don’t find any difference in my productivity within that range, though I imagine that if I pushed myself to work more than 8, I would pretty quickly hit diminishing returns.
- Learning a new field: I can’t emphasize enough the value of just talking to existing experts. For me at least, it’s by far the most efficient way to get up to speed quickly. For that reason, I really value having a large network of diverse people I can contact with questions. I put a fair amount of effort into cultivating such a network.
- Hard problems: I’m fortunate that my work is almost always intrinsically interesting. So even if I don’t make progress on a problem, I continue to be motivated to work on it because the work itself is so pleasant. That said, as I’ve emphasized above, when I’m stuck, I find it most helpful to talk to lots of people about the problem.
- Emotional motivators: When I reflect on my life as a whole, I’m happy that I’m in a career that aims to improve the world. But in terms of what gets me out of bed in the morning and excited to work, it’s almost never the impact I might have. It’s the intrinsically interesting nature of my work. I almost certainly would not be successful if I did not find my research so fascinating.
- Typing speed: No idea what my typing speed is, but it doesn’t feel particularly fast, and that doesn’t seem to handicap me. I’ve always considered myself a slow thinker, though.
- Obvious questions: Yeah, I think there is a general skill of “noticing the obvious.” I don’t think I’m great at it, but one thing I do pretty often is reflect on the sorts of things that appear obvious now but weren’t obvious to smart people ~200 years ago.
- Tiredness, focus, etc.: Regular exercise certainly helps. I haven’t tried anything else. Mostly I’ve just acclimated to getting work done even though I’m tired. (Not sure I would recommend that “solution,” though!)
- Meta: I’d like to see others answer questions 1, 3, 6, 7, and 10.
Hi Roger,
There are different possible scenarios in which invertebrates turn out to be sentient. It might be the case, for instance, that panpsychism is true. So if one comes to believe that invertebrates are sentient because panpsychism is true, one should also come to believe that robots and plants are sentient. Or it could be that some form of integrated information theory is true, and invertebrates instantiate enough integration for sentience. In that case, the probability that you assign to the sentience of plants and robots will depend on your assessment of their relevant level of integration.
For what it’s worth, here’s how I think about the issue: sentience, like other biological properties, has an evolutionary function. I take it as a datum that mammals are sentient. If we can discern the role that sentience is playing in mammals, and it appears there is analogous behavior in other taxa, then, in the absence of defeaters, we are licensed to infer that individuals of those taxa are sentient. In the past few years I’ve updated toward thinking that arthropods and (coleoid) cephalopods are sentient, but the majority of these updates have been based on learning new empirical information about these animals. (Basically, arthropods and cephalopods engage in way more complex behaviors than I realized.) When we constructed our invertebrate sentience table, we also looked at plants, prokaryotes, protists, and, in an early version of the table, robots and AIs of various sorts. The individuals in these categories did not engage in the sort of behaviors that I take to be evidence of sentience, so I don’t feel licensed to infer that they are sentient.
Hey Edo,
I definitely receive valuable feedback on my work by posting it on the Forum, and the feedback is often most valuable when it comes from people outside my current network. For me, the best example of this dynamic was when Gavin Taylor left extensive comments on our series of posts about features relevant to invertebrate sentience (here, here, and here) back in June 2019. I had never interacted with Gavin before, but because of his comments, we set up a meeting, and he has become an invaluable collaborator across many different projects. My work is much improved due to his insights. I’m not sure Gavin and I would ever have met (much less collaborated) if not for his comments on the Forum.
Sometimes, they ask us to instead donate the money to a charity on their behalf, which we are also willing to do.
Oh, cool. I didn’t realize this was a possibility. I’ve always claimed the money and then donated the same amount to Rethink Priorities (where I work). If I’m lucky enough to have the opportunity in the future, I’ll do this instead.
(I basically get paid to write content for the Forum, so I’m not really comfortable accepting the prize money.)
Hey Michael,
Thanks for your comment! The point you raise is a good one. I’ve thought about related issues over the last few months, but my views still aren’t fully settled. And I’ll just reiterate for readers that my tentative conclusions are just that: tentative. More than anything, I want everyone to appreciate how much uncertainty we face here.
We can crudely ask whether motivation is tied to the relative intensity of valenced experience or the absolute intensity of valenced experience. (‘Crudely’ because the actual connection between motivation and valenced experience is likely to be a bit messy and complicated.) If it’s the relative intensity, then, all else equal, a pain at the top end of an animal’s range is going to be very motivating, even if the pain has a phenomenal feel comparable to a human experiencing a very mild muscle spasm. If it’s absolute intensity, then, all else equal, a pain like that won’t be very motivating. I’m not sure what the right view is here, but the relative view that you endorse in the comment is certainly a live option, so let’s go with that.
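To make the distinction concrete, here’s one crude formalization (the notation is mine and purely illustrative): let $I(x)$ be the absolute phenomenal intensity of experience $x$ and $I_{\max}$ the top of the animal’s intensity range. The two views then say, roughly,

$$m_{\text{rel}}(x) = \frac{I(x)}{I_{\max}} \qquad \text{vs.} \qquad m_{\text{abs}}(x) = I(x)$$

where $m$ is motivational force. On the relative view, an experience at the top of an animal’s range ($I(x) = I_{\max}$) is maximally motivating no matter how small $I_{\max}$ is in absolute terms; on the absolute view, that same experience might barely move the animal.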
If it’s relative intensity that matters for motivation, then natural selection needs a reason to generate big differences in absolute intensity. (Setting aside the fact that evolution sometimes goes kinda haywire.) You suggest the fitness benefit of a fine-grained valence scale, especially for animals that face many competing pulls on their attention. I agree that the resolution of an animal’s valence scale probably matters. But it’s unclear to me how much this tells us about differences in absolute intensity.
It seems possible to be better or worse at distinguishing gradations of valenced experience. It might be the case that animals with similar intensity ranges differ in the number of intensity levels they can distinguish. (It might also be the case that animals with different intensity ranges have a similar number of distinguishable intensity levels.) So if there were a fitness benefit to having 100 distinguishable gradations rather than 10, evolution could either select for animals with wider ranges or select for animals with finer resolution. (Or some combination thereof.) Considerations like the Weber-Fechner law incline me toward thinking an increase in resolution would be more efficient than an increase in range (though of course there are limits to how much resolution can be increased). But at this point I’m just speculating; there’s a lot more basic research that needs to be done to get a handle on these sorts of questions.
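Here’s a rough way to see why resolution looks cheaper than range (treating the Weber-Fechner law as a simple idealization): if an animal’s just-noticeable difference in valence is a constant fraction $k$ of current intensity, the number of distinguishable gradations between $I_{\min}$ and $I_{\max}$ is approximately

$$n \approx \frac{\ln(I_{\max}/I_{\min})}{\ln(1+k)}$$

Because $n$ grows only logarithmically with range, going from 10 to 100 gradations by widening the range alone would require raising $I_{\max}/I_{\min}$ to the tenth power, whereas improving discrimination (shrinking $k$) increases $n$ roughly in inverse proportion to $k$.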
Research Summary: The Intensity of Valenced Experience across Species
Oh nice, that sounds really cool—definitely keep me updated!
Hey Peter,
Thanks for the kind words. There’s no current plan to pursue academic publication. This question comes up periodically at Rethink Priorities, and there’s a bit of disagreement about what the right strategy is here. Speaking personally, I would love to see more of my work published academically. However, thinking about strategic decisions like these is not my comparative advantage, so I’m happy to defer to others on this question, and leadership at Rethink Priorities generally isn’t keen on using researcher hours to pursue academic publication. The main reason is the time cost. According to the prevailing view at Rethink Priorities, the benefits of widening the audience and earning credibility for the organization normally don’t outweigh the time cost of pursuing publication. Of course, there are exceptions: if there are special reasons to publish academically (e.g., field-building for welfare biology), or if converting a report into an academic publication would take an unusually short time, then it might be worth it.
For now, the most plausible route by which my research will get published academically is collaboration with others. For example, Bob Fischer recently (and generously) offered to co-author a paper with me based on my report about differences in the subjective experience of time across species; that paper is now under review. This arrangement significantly reduced the time burden on me. Naturally, I’m very open to similar collaborations.
Great, this is fantastic, thanks! Clearly there is a lot more I need to think about! I just sent you a message to arrange a chat. For anyone following this exchange, I’ll try to post some more thoughts on this topic after Adam and I have talked.
Hey Adam,
Thanks for your comment! I agree that the distinction between the sensory and affective components of pain experience is an important one that merits more discussion. I briefly considered including such a discussion, but the report was already long and I was hoping to avoid adding another layer of complexity. My assumption was that, while it’s possible for the two components to come apart, such dissociation is rare enough that we can safely ignore it at this level of abstraction. That could be a naïve assumption, though. Even if not, you’re right that by failing to take account of the different components, I’ve introduced an ambiguity into the report. When I refer to the intensity of pain, I intend to refer to the degree of felt badness of the experience (that is, the affective component). But the sensory component can also be said to be more or less intense, and some of the literature I cite either conflates the two components or refers to sensory intensity.
I would be interested to hear more of your thoughts about the Yue article and related work. Suppose it’s true that gamma-band oscillations reliably track the sensory intensity of pain experience and that for our purposes the sensory component is morally irrelevant. If sensory intensity and affective intensity are correlated in humans, do you think it’s reasonable to assume that the components are correlated in other mammals? If so, then we can still use gamma-band oscillations as a rough proxy for the thing we care about, at least in animals neurologically similar to humans.
Basically, my main questions are:
(1) How often and under what conditions does sensory intensity come apart from affective intensity in humans?
(2) How can we use what we know about the components coming apart in humans to predict how often and under what conditions sensory intensity and affective intensity come apart in nonhuman animals?
If you’re interested, I’d love to schedule a call to talk further. This might be too big a topic to discuss easily via Forum comments.
Hey Michael,
I think this is an interesting idea. Unfortunately, I’m woefully ignorant about the relevant details, so it’s unclear to me whether the differences between artificial neural networks and actual brains make the analogy basically useless. Still, I think it would probably be worthwhile for someone with more specialized knowledge than me to think through the analogy roughly along the lines you’ve outlined and see what comes of it. I’d be happy to collaborate if anyone (including yourself) wants to take up the task.
There are lots of potential points of contact. The most obvious is that to determine an individual’s possible intensity range of valenced experience, we have to think about the most intense (in the sense of most positive and most negative) experiences available to that individual. I don’t have a view about how long-tailed the distribution of pleasures and pains is in humans, but I agree that it’s a question worth investigating. And if there are differences in how long-tailed the distribution of valenced experiences is across species, that would entail differences in possible (though not necessarily characteristic) intensity range across species.
Happy to speak to something more specific if you have a particular question in mind.
Thanks for the clarification, Brian!
It’s plausible to assign split-brain patients 2x moral weight because it’s plausible that split-brain patients contain two independent morally relevant seats of consciousness. (To be clear, I’m just claiming this is a plausible view; I’m not prepared to give an all-things-considered defense of the view.) I take it to be an empirical question how much of the corpus callosum needs to be severed to generate such a split. Exploring the answer to this empirical question might help us think about the phenomenal unity of creatures with less centralized brains than humans, such as cephalopods.
This seems like a pretty good reason to reject a simple proportion account
To be clear, I also reject the simple proportion account. For that matter, I reject any simple account. If there’s one thing I’ve learned from thinking about differences in the intensity of valenced experience, it’s that brains are really, really complicated and messy. Perhaps that’s the reason I’m less moved by the type of thought experiments you’ve been offering in this thread. Thought experiments, by their nature, abstract away a lot of detail. But because the neurological mechanisms that govern valenced experience are so complex and so poorly understood, it’s hardly ever clear to me which details can be safely ignored. Fortunately, our tools for studying the brain are improving every year. I’m tentatively confident that the next couple decades will bring a fairly dramatic improvement in our neuroscientific understanding of conscious experience.
Hey Michael,
Thanks for engaging so deeply with the piece. This is a super complicated subject, and I really appreciate your perspective.
I agree that hidden qualia are possible, but I’m not sure there’s much of an argument on the table suggesting they exist. When possible, I think it’s important to try to ground these philosophical debates in empirical evidence. The split-brain case is interesting precisely because there is empirical evidence for dual seats of consciousness. From the SEP entry on the unity of consciousness:
In these operations, the corpus callosum is cut. The corpus callosum is a large strand of about 200,000,000 neurons running from one hemisphere to the other. When present, it is the chief channel of communication between the hemispheres. These operations, done mainly in the 1960s but recently reintroduced in a somewhat modified form, are a last-ditch effort to control certain kinds of severe epilepsy by stopping the spread of seizures from one lobe of the cerebral cortex to the other. For details, see Sperry (1984), Zaidel et al. (1993), or Gazzaniga (2000).
In normal life, patients show little effect of the operation. In particular, their consciousness of their world and themselves appears to remain as unified as it was prior to the operation. How this can be has puzzled a lot of people (Hurley 1998). Even more interesting for our purposes, however, is that, under certain laboratory conditions, these patients seem to behave as though two ‘centres of consciousness’ have been created in them. The original unity seems to be gone and two centres of unified consciousness seem to have replaced it, each associated with one of the two cerebral hemispheres.
Here are a couple of examples of the kinds of behaviour that prompt that assessment. The human retina is split vertically in such a way that the left half of each retina is primarily hooked up to the left hemisphere of the brain and the right half of each retina is primarily hooked up to the right hemisphere of the brain. Now suppose that we flash the word TAXABLE on a screen in front of a brain bisected patient in such a way that the letters TAX hit the left side of the retina, the letters ABLE the right side, and we put measures in place to ensure that the information hitting each half of the retina goes only to one lobe and is not fed to the other. If such a patient is asked what word is being shown, the mouth, controlled usually by the left hemisphere, will say TAX while the hand controlled by the hemisphere that does not control the mouth (usually the left hand and the right hemisphere) will write ABLE. Or, if the hemisphere that controls a hand (usually the left hand) but not speech is asked to do arithmetic in a way that does not penetrate to the hemisphere that controls speech and the hands are shielded from the eyes, the mouth will insist that it is not doing arithmetic, has not even thought of arithmetic today, and so on—while the appropriate hand is busily doing arithmetic!
So I don’t think it’s implausible to assign split-brain patients 2x moral weight.
I also think it’s possible to find empirical evidence for differences in phenomenal unity across species. There’s some really interesting work concerning octopuses. See, for example, “The Octopus and the Unity of Consciousness”. (I might write more about this topic in a few months, so stay tuned.)
As for the paper, it seems neutral between the view that the raw number of neurons firing is correlated with valence intensity (which is the view I was disputing) and the view that the proportional number of neurons firing (relative to some brain region) is correlated with valence intensity. So I’m not sure the paper really cuts any dialectical ice. (Still a super interesting paper, though, so thanks for alerting me to it!)
Hi Michael,
Thanks for the comment and thanks for prompting me to write about these sorts of thought experiments. I confess I’ve never felt their bite, but perhaps that’s because I’ve never understood them. I’m not sure what the crux of our disagreement is, and I worry that we might talk past each other. So I’m just going to offer some reactions, and I’ll let you tell me what is and isn’t relevant to the sort of objection you’re pursuing.
- Big brains are not just collections of little brains. Large brains are incredibly specialized (though somewhat plastic).
- At least in humans, consciousness is unified. Even if you could carve out some smallish region of a human brain and put it in a system such that it becomes a seat of consciousness, that doesn’t mean that within the human brain that region is itself a seat of consciousness. (Happy to talk in much more detail about this point if this turns out to be the crux.)
- Valence intensity isn’t controlled by the raw number of neurons firing. I didn’t find any neuroscience papers that suggested there might be a correlation between neuron count and valence intensity. As with all things neurological, the actual story is a lot more complicated than a simple metric like neuron count would suggest.
- Not sure where this fits in, but if you yoke two brains together, it seems to me you’d have two independent seats of consciousness. There’s probably some way of filling out the thought experiment such that that would not be the case, but I think the details actually matter here, so I’d have to see the filled-out thought experiment.
That’s fine by me!