Thanks for doing this AMA. I’m curious for more information on your views about the objectivity of consciousness, e.g. Is there an objectively correct answer to the question “Is an insect conscious?” or does it just depend on what processes, materials, etc. we subjectively choose to use as the criteria for consciousness?
The Open Phil conversation notes with Brian Tomasik say:
Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism
(For readers, roughly speaking, Type A physicalism is the view that consciousness lacks an objective definition. Tomasik’s well-known analogy is that there’s no objective definition of a table, e.g. if you eat on a rock, is it a table? I would add that even if there’s something we can objectively point to as our own consciousness (e.g. the common feature of the smell of a mushroom, the emotion of joy, seeing the color red), that doesn’t give you an objective definition, in the same way that knowing one particular piece of wood on four legs is a table (or even having several such examples) doesn’t give you an objective definition of a table.)
However, in the report, you write as though there is an objective definition (e.g. in the “Consciousness, innocently defined” section), and I feel most readers of the report will get that impression, e.g. that there’s an objective answer as to whether insects are conscious.
Could you elaborate on your view here and the reasoning behind it? Perhaps you do lean towards Type A (no objective definition), but think it’s still useful to use common sense rhetoric that treats it as objective, and you don’t think it’s that harmful if people incorrectly lean towards Type B. Or you lean towards Type A, but think there’s still enough likelihood of Type B that you focus on questions like “If Type B is true, then is an insect conscious?” and would just shorthand this as “Is an insect conscious?” because e.g. if Type A is true, then consciousness research is not that useful in your view.
I’m not sure what you mean by “objective definition” or “objectively correct answer,” but I don’t think I think of consciousness as being “objective” in your sense of the term.
The final question, for me, is “What should I care about?” I elaborate my “idealized” process for answering this question in section 6.1.2. Right now, my leading guess for what I’d conclude upon going through some approximation of that idealized process is that I’d care about beings with valenced conscious experience, albeit with different moral weights depending on a variety of other factors (early speculations in Appendix Z7).
But of course, I don’t know quite what sense of “valenced conscious experience” I’d end up caring about upon undergoing my idealized process for making moral judgments, and the best I can do at this point is something like the definition by example (at least for the “consciousness” part) that I begin to elaborate in section 2.3.1.
Re: Type A physicalism, aka Type A materialism. As mentioned in section 2.3.2, I do think my current view is best thought of as “‘type A materialism,’ or perhaps toward the varieties of ‘type Q’ or ‘type C’ materialism that threaten to collapse into ‘type A’ materialism anyway…” (see the footnote after this phrase for explanations). One longer article that might help clarify how I think about “type A materialism” w.r.t. consciousness or other things is Mixed Reference: The Great Reductionist Project and its dependencies.
That said, I do think the “triviality” objection is a serious one (Ctrl+F the report for “triviality objection to functionalism”), and I haven’t studied the issue enough to have a preferred answer for it, nor am I confident there will ever be a satisfying answer to it — at least, for the purposes of figuring out what I should care about. Brian wrote a helpful explainer on some of these issues: How to Interpret a Physical System as a Mind. I endorse many of the points he argues for there, though he and I end up with somewhat different intuitions about what we morally care about, as discussed in the notes from our conversation.
I think Tomasik’s essay is a good explanation of objectivity in this context. The most relevant brief section:
Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.
If you’re Type A, then presumably you don’t think there’s this sort of “not merely conceptually described” consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:
I’ll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would’ve given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would’ve been coming from a perspective of “I really don’t see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it’s plausible, so I guess I should also think it’s at least a little plausible.” Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they’re relatively low probability, and of course I retain some degree of “and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don’t currently find at all plausible.” As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.
Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don’t think there’s a great alternative to this for Type A folks, but what Tomasik does is just frequently qualify that when he says something like 5% consciousness for fruit flies, it’s only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).
I do worry that this is a bad thing for advocating for small/simple-minded animals, given it makes people think “Oh, I can just assign 0% to fruit flies!” but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.
Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)
My hope was that the Type A-ness / subjectivity of the concept of “consciousness” I’m using would be clear from sections 2.3.1 and 2.3.2, and then I can write paragraphs like the one above about fruit fly consciousness, which refers back to the subjective notion of consciousness introduced in section 2.3.
But really, I just find it very cumbersome to write in detail and at length about consciousness in a way that clearly marks every sentence containing consciousness words as referring to subjective / Type A-style consciousness. It’s similar to what I say in the report about fuzziness:
given that we currently lack such a detailed decomposition of “consciousness,” I reluctantly organize this report around the notion of “consciousness,” and I write about “which beings are conscious” and “which cognitive processes are conscious” and “when such-and-such cognitive processing becomes conscious,” while pleading with the reader to remember that I think the line between what is and isn’t “conscious” is extremely “fuzzy” (and as a consequence I also reject any clear-cut “Cartesian theater.”)
But then, throughout the report, I make liberal use of “normal” phrases about consciousness such as what’s conscious vs. not-conscious, “becoming” conscious or not conscious, what’s “in” consciousness or not, etc. It’s just really cumbersome to write in any other way.
Another point is that, well, I’m not just a subjectivist / Type A theorist about consciousness, but about nearly everything. So why shouldn’t we feel fine using more “normal” sentence structures to talk about consciousness, if we feel fine talking about “living things” and “mountains” and “sorting algorithms” and so on that way? I don’t have any trouble talking about the likelihood that there’s a mountain in such-and-such city, even though I think “mountain” is a layer of interpretation we cast upon the world.
That pragmatic approach makes sense and helps me understand your view better. Thanks! I do feel like the consequences of suggesting objectivism for consciousness are more significant than for “living things,” “mountains,” and even terms that are themselves very important like “factory farming.”
Consequences being things like (i) whether we get wrapped up in the ineffability/hard problem/etc. such that we get distracted from the key question (for subjectivists) of “What are the mental things we care about, and which beings have those?” and (ii) in the particular case of small minds (e.g. insects, simple reinforcement learners), whether we try to figure out their mental lives based on objectivist speculation (which, for subjectivists, is misguided) or force ourselves to decide what the mental things we care about are, and then thoughtfully evaluate small minds on that basis. I think evaluating small minds is where the objective/subjective difference really starts to matter.
Also, to a lesser extent, (iii) how much we listen to “expert” opinion outside of just people who are very familiar with the mental lives of the being in question, and (iv) unknown unknowns and keeping a norm of intellectual honesty, which seems to apply more to discussions of consciousness than of mountains/etc.
I don’t think I understand what you mean by consciousness being objective. When you mention “what processes, materials, etc. we subjectively choose to use as the criteria for consciousness”, this sounds to me as if you’re talking about people having different definitions of consciousness, especially if the criteria are meant as definitive rather than indicative. However, presumably in many cases whether the criteria are present will be an objective question.
When you talk about whether “consciousness is an actual property of the world”, do you mean whether it’s part of ontological base reality?
A good example of what thebestwecan means by “objectivity” is the question “If a tree falls in a forest and no one is around to hear it, does it make a sound?” He and I would say there’s no objective answer to this question because it depends what you mean by “sound”. I think “Is X conscious?” is a tree-falls-in-a-forest kind of question.
When you talk about whether “consciousness is an actual property of the world”, do you mean whether it’s part of ontological base reality?
Yeah, ontologically primitive, or at least so clearly a natural kind (like the difference between gold atoms and potassium atoms) that people wouldn’t really dispute the boundaries of the concept. (Admittedly, there might be edge cases where even what counts as a “gold atom” is up for debate.)
The idea of a natural kind is helpful. The fact that people mean different things by “consciousness” seems unsurprising, as that’s the case for any complex word that people have strong motives to apply (in this case because consciousness sounds valuable). It also tells us little about the moral questions we’re considering here. Do you guys agree or am I missing something?
I agree that it tells us little about the moral questions, but understanding that consciousness is a contested concept rather than a natural kind is itself a significant leap forward in the debate. (Most philosophers haven’t gotten that far.)
One thing that makes consciousness interesting is that there’s such a wide spectrum of views, from some people thinking that among current entities on Earth, only humans have consciousness, to some people thinking that everything has consciousness.
but understanding that consciousness is a contested concept rather than a natural kind is itself a significant leap forward in the debate. (Most philosophers haven’t gotten that far.)
Who does and doesn’t agree with that, then? You and thebestwecan clearly do. Do you know the opinions of prominent philosophers in the field? For instance, David Chalmers, who sounds like he’s among them(?)
IMO, the philosophers who accept this understanding are the so-called “type-A physicalists” in Chalmers’s taxonomy. Here’s a list of some such people, but they’re in the minority. Chalmers, Block, Searle, and most other philosophers of mind aren’t type-A physicalists.
IMO, the philosophers who accept this understanding are the so-called “type-A physicalists” in Chalmers’s taxonomy.
I’m not wholly sure I understand the connection between this and denying that consciousness is a natural kind. The best I can do (and perhaps you or thebestwecan can do better? ;-) ) is:
“If consciousness is a natural kind, then the existence of that natural kind is a separate fact from the existence of such-and-such a physical brain state (and vice versa)”
You’re right that there’s probably not a strict logical relationship between those things. Also, I should note that I have a poor understanding of the variety of different type-B views. What I usually have in mind as “type B” is the view that the connection between consciousness and brain processing is only something we can figure out a posteriori, by noticing the correlation between the two. If you hold that view, it presumably means you think consciousness is a definite thing that we discover introspectively. For example, we can say we’re conscious of an apple in front of us but are not conscious of a very fast visual stimulus. Since we generally assume most of these distinctions between conscious and unconscious events are introspectively clear-cut (though some disagree), there would seem to be a fairly sharp distinction within reality itself between conscious vs unconscious? Hence, consciousness would seem more like a natural kind.
In contrast, the type-A people usually believe that consciousness is a label we give to certain physical processes, and given the complexity of cognitive systems, it’s plausible that different people would draw the boundaries between conscious vs unconscious in different places (if they care to make such a distinction at all). Daniel Dennett, Marvin Minsky, and Susan Blackmore are all type-A people and all of them make the case that the boundaries of consciousness are fuzzy (or even that the distinction between conscious and unconscious isn’t useful at all).
In theory, there could be a type-A physicalist who believes that there will turn out to be some extremely clean distinction in the brain that captures the difference between consciousness and unconsciousness, such that almost everyone would agree that this is the right way to carve things up. In this case, the type-A person could still believe consciousness will turn out to be a natural kind.
(I’m not an expert on either the type A/B distinction or natural kinds, so apologies if I’m misusing concepts here.)