[Replying separately with comments on progress on the pretty hard problem; the hard problem; and the meta-problem of consciousness]
Progress on the hard problem
I am much less sure of how to think about this than about the pretty hard problem. This is in part because, in general, I’m pretty confused about how philosophical methodology works, what it can achieve, and the extent to which there is progress in philosophy. This uncertainty is not in spite of, but probably because of, doing a PhD in philosophy!
One claim that I would hang my hat on is that the elaboration of (plausible) philosophical positions in greater detail, and more detailed scrutiny of them, is a kind of progress. And in this regard, I think the last 25 years have seen a lot of progress on the hard problem. The possible solution space has been sketched more clearly, and arguments elaborated. One particularly interesting trend is the elaboration of the more ‘extreme’ solutions to the hard problem: panpsychism and illusionism. Panpsychism solves the hard problem by making consciousness fundamental and widespread; illusionism dissolves the hard problem by denying the existence of consciousness.
Funnily enough, panpsychists and illusionists actually agree on a lot—they are both skeptical of programs that seek to identify consciousness with some physical, computational, or neural property; they both think that if consciousness exists, then it has some strange-sounding relation to the physical. For illusionists, this (putative) anomalousness of consciousness is part of why they conclude it must not exist. For panpsychists, it is part of why they are led to embrace a position that strikes many as radical. You can think of this situation by analogy: theologically conservative religious believers and hardcore atheists are often united in their criticisms of theologically liberal religious believers. Panpsychists and illusionists are likewise united in their criticisms of ‘moderate’ solutions to the hard problem.
I think the elaboration of these positions is progress. And this situation also forces non-panpsychist consciousness realists, who reject the ‘extremism’ of both illusionism and panpsychism, to respond and to elaborate their views more rigorously.
For my part, reading the recent literature on illusionism has made me far more sympathetic to it as a position than I was before. (At first glance, illusionism can just sound like an immediate non-starter. Cartoon sketch of an objection: how could consciousness be an ‘illusion’? Illusions are mismatches between appearance and reality, and with consciousness the appearance is the reality. Illusionists can respond to this objection, but that’s a subject for another day.) If I continue to be sympathetic to illusionism, then I can say: the growing elaboration of, and appeal of, illusionism in the last decade represents progress.
But I think there is at least a 40% chance that my mind will have changed significantly regarding illusionism within the next three months.
That’s a great question. I’ll reply separately with my takes on progress on a) the pretty hard problem, b) the hard problem, and c) something called the meta-problem of consciousness [1].
[1] With apologies for introducing yet another ‘problem’ to distinguish between, when I’ve already introduced two! (Perhaps you can put these three problems into Anki?)
Progress on the pretty hard problem
This is my attempt to explain Jonathan Birch’s recent proposal for studying invertebrate consciousness. Let me know if it makes rough sense!
The problem with studying animal consciousness is that it is hard to know how much we can extrapolate from what we know about what suffices for human consciousness. Let’s grant that we know from experiments on humans that you will be conscious of a visual perception if you have a neural system for broadcasting information to multiple sub-systems in the brain (this is the Global Workspace Theory mentioned above), and that visual perception is broadcast. Great, now we know that this sophisticated human Global Workspace suffices for consciousness. But how much of that is necessary? How much simpler could the Global Workspace be and still result in consciousness?
When we try to take a theory of consciousness “off the shelf” and apply it to animals, we face a choice of how strict to be. We could say that the Global Workspace must be as complicated as in the human case. Then no non-human animals count as conscious. Or we could say that the Global Workspace can be very simple. Then maybe even simple programs count as conscious. To know how strict or liberal to be in applying the theory, we need to know which animals are conscious. Which is the very question!
Some people try to get around this by proposing tests for consciousness that avoid the need for theory—the Turing Test would be an example of this in the AI case. But these usually end up sneaking theory in through the back door.
Here’s Birch’s proposal for getting around this impasse. It has three steps:
1. Make a minimal theoretical assumption about consciousness, the ‘facilitation hypothesis’: phenomenally conscious perception, relative to unconscious perception, facilitates a “cluster” of cognitive abilities. It’s a cluster because it seems like “the abilities will come and go together, co-varying in a way that depends on whether or not a stimulus is consciously perceived” (8). Empirically, we have evidence that the cluster includes abilities such as trace conditioning, rapid reversal learning, and cross-modal learning.
2. Look for these clusters of abilities in animals.
3. See if things which make perceptions unconscious in humans (flashing them quickly and so forth) seem to ‘knock out’ that cluster in animals. If we can make the clusters come and go like this, it’s a pretty reasonable inference that the cause is consciousness coming and going. (A toy sketch of the kind of analysis this implies follows below.)
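To make step 3 concrete, here is a toy sketch in Python of the kind of analysis the proposal suggests. Everything here is hypothetical: the data file, the column names, and the crude “all abilities drop together” check are my placeholders for illustration, not Birch’s actual methods.

```python
# Toy sketch only: the dataset, columns, and the crude co-variation check
# below are hypothetical placeholders, not Birch's actual analysis.
import pandas as pd

ABILITIES = ["trace_conditioning", "rapid_reversal_learning", "cross_modal_learning"]

# Assume each row is one animal tested under one condition, with a
# performance score per ability and a boolean 'masked' flag indicating
# whether the stimulus was made unconscious-in-humans (e.g., flashed
# too quickly to be consciously perceived).
df = pd.read_csv("bee_trials.csv")  # hypothetical data file
df["masked"] = df["masked"].astype(bool)

# Mean performance per ability, split by masking condition.
means = df.groupby("masked")[ABILITIES].mean()
print(means)

# The facilitation hypothesis predicts the cluster comes and goes *together*:
# masking should knock out all three abilities, not just one.
drops = means.loc[False] - means.loc[True]
if (drops > 0).all():
    print("All abilities degrade under masking: consistent with one cluster.")
else:
    print("Abilities dissociate under masking: evidence against a single cluster.")
```

In a real study you would want per-animal statistics and significance tests rather than a raw comparison of means, but the shape of the inference is the same: does the whole cluster co-vary with the masking manipulation?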
As I understand it, Birch (a philosopher) is currently working with scientists to flash stuff at bees and so forth. I think Birch’s research proposal is a great conceptual advance, I find the empirical research itself very exciting, and I’m curious to see what comes out of it.
The pretty hard problem of consciousness
Oh and I should add: funnily enough, you are on my list of people to reach out to! :D
Great question, I’m happy to share.
One thing that makes the reaching out easier in my case is that I do have one specific ask: whether they would be interested in (digitally) visiting the reading group. But I also ask if they’d like to talk with me one-on-one about their work. For this ask, I’ll mention a paper of theirs that we have read in the reading group and how I see it as related to what we are working on, and indicate what broad questions I’m trying to understand better, related to their work.
On the call itself, I am a) trying to get a better understanding of the work and b) letting them know what FHI is up to. The very act of preparing for the meeting forces me to understand their work a lot better—I am sure that you have had a similar experience with podcasting! And then the conversations themselves are informative and also enjoyable (for me at least!).
The questions vary according to each person’s work. But one question I’ve asked everyone is:
If you could fund a bunch of work with the aim of making the most progress on consciousness in the next 40 years (especially with an eye to knowing which AIs are conscious), what would you fund? What is most needed for progress?
One last general thought: reaching out to people can be aversive, but in fact it has virtually no downside (as long as you are courteous with your email, of course). The email might get ignored, which is fine. But the best case—and the modal case, I think—is that people are happy that someone is interested in their work.
Thanks Darius! It was my pleasure.
That’s a great point. A related point that I hadn’t really clocked until someone pointed it out to me recently, though it’s obvious in retrospect, is that (EA aside) in an academic department it is structurally unlikely that you will have a colleague who shares your research interests to a large extent: it’s rare that a department is big enough to have two people doing the same thing, since departments need coverage of their whole field for teaching and supervision.
“I’ve learned to motivate myself, create mini-deadlines, etc. This is a constant work in progress—I still have entire days where I don’t focus on what I should be doing—but I’ve gotten way better.”
What do you think has led to this improvement, aside from just time and practice? Favorite tips / tricks / resources?
Writing about my job: Research Fellow, FHI
Thanks for this. I was curious about “Pick a niche or undervalued area and become the most knowledgeable person in it.” Do you feel comfortable saying what the niche was? Or even if not, can you say a bit more about how you went about doing this?
This is very interesting! I’m excited to see connections drawn between AI safety and the law / philosophy of law. It seems there are a lot of fruitful insights to be had.
You write,
The rules of Evidence have evolved over long experience with high-stakes debates, so their substantive findings on the types of arguments that prove problematic for truth-seeking are relevant to Debate.
Can you elaborate a bit on this?
I don’t know anything about the history of these rules of evidence. But why think that, over this history, the rules have trended towards truth-seeking per se? I wouldn’t be surprised if they have evolved to better serve the purposes of the legal system over time, but presumably the relationship between that end and truth-seeking is quite complex. Also, the people changing the rules could be mistaken about which sorts of evidence do in fact tend to lead to wrong decisions.
I think all of this is compatible with your claim. But I’d like to hear more!
Thanks for the great summary! A few questions about it:
1. You call mesa-optimization “the best current case for AI risk”. As Ben noted at the time of the interview, this argument hasn’t yet really been fleshed out in detail. And as Rohin subsequently wrote in his opinion of the mesa-optimization paper, “it is not yet clear whether mesa optimizers will actually arise in practice”. Do you have thoughts on what exactly the “Argument for AI Risk from Mesa-Optimization” is, and/or a pointer to the places where, in your opinion, that argument has been made (aside from the original paper)?
2. I don’t entirely understand the remark about the reference class of ‘new intelligent species’. What species are in that reference class? Many species which we regard as quite intelligent (orangutans, octopuses, New Caledonian crows) aren’t risky. Probably, you mean a reference class like “new species as smart as humans” or “new ‘generally intelligent’ species”. But then we have a very small reference class and it’s hard to know how strong that prior should be. In any case, how were you thinking of this reference class argument?
3. ‘The Boss Baby’, starring Alec Baldwin, is available for rental on Amazon Prime Video for $3.99. I suppose this is more of a comment than a question.
[Replying separately with comments on progress on the pretty hard problem; the hard problem; and the meta-problem of consciousness]
The meta-problem of consciousness is distinct from both a) the hard problem: roughly, the fundamental relationship between the physical and the phenomenal, and b) the pretty hard problem: roughly, knowing which systems are phenomenally conscious.
The meta-problem is c) explaining “why we think consciousness poses a hard problem, or in other terms, the problem of explaining why we think consciousness is hard to explain” (6)
The meta-problem has a very interesting relationship to the hard problem. To see what this relationship is, we need a distinction between the “hard problem” of explaining consciousness and what Chalmers calls the ‘easy’ problems of explaining “various objective behavioural or cognitive functions such as learning, memory, perceptual integration, and verbal report”.
(Much like ‘pretty hard’, ‘easy’ is tongue in cheek—the easy problems are tremendously difficult, and thousands of brilliant people with expensive fancy machines are constantly hard at work on them.)
Ease of the easy problems: “the easy problems are easy because we have a standard paradigm for explaining them. To explain a function, we just need to find an appropriate neural or computational mechanism that performs that function. We know how to do this at least in principle.”
Hardness of the hard problem: “Even after we have explained all the objective functions that we like, there may still remain a further question: why is all this functioning accompanied by conscious experience?...the standard methods in the cognitive sciences have difficulty in gaining purchase on the hard problem.”
The meta-problem is interesting because it is deeply related to the hard problem, but it is strictly speaking an ‘easy’ problem: it is about explaining certain cognitive and behavioral functions. For example: thinking “I am currently seeing purple and it seems strange to me that this experience could simply be explained in terms of physics” or “It sure seems like Mary in the black and white room lacks knowledge of what it’s like to see red”; or sitting down and writing “boy, consciousness sure is puzzling, I bet I can get funding to work on this.”
Chalmers hopes that cognitive science can gain traction on the meta-problem by explaining how these cognitive functions and behaviors come about in ‘topic-neutral’ terms that don’t commit to any particular metaphysical theory of consciousness. And then, if we have a solution to the meta-problem, this might shed light on the hard problem.
One particularly intriguing connection is that it seems like a) a solution to the meta-problem should at least be possible, and b) if it is, then that gives us a really good reason not to trust our beliefs about consciousness! The argument, with a minimal formal sketch to follow:
1. A solution to the meta-problem is possible, so there is a correct explanation of our beliefs about consciousness that is independent of consciousness.
2. If there is a correct explanation of our beliefs about consciousness that is independent of consciousness, those beliefs are not justified.
3. Our beliefs about consciousness are not justified.
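For what it’s worth, the deductive skeleton here is just two applications of modus ponens. A minimal Lean sketch, where the proposition names are my own placeholders rather than anything from Chalmers:

```lean
-- Logical skeleton only; the proposition names are placeholders.
example (SolutionPossible IndepExplanation Unjustified : Prop)
    (p1a : SolutionPossible)                    -- a solution to the meta-problem is possible
    (p1b : SolutionPossible → IndepExplanation) -- if so, there is a consciousness-independent
                                                -- explanation of our consciousness beliefs
    (p2  : IndepExplanation → Unjustified)      -- premise 2: such an explanation undercuts
                                                -- the justification of those beliefs
    : Unjustified :=                            -- conclusion: the beliefs are not justified
  p2 (p1b p1a)
```

All the philosophical action is in the premises, of course; the point is just that, granting them, the conclusion follows.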
Part of the aforementioned growing interest in illusionism is that I think this argument is pretty good. Chalmers came up with it and elaborated it—even though he is not an illusionist—and I like his elaboration of it more than his replies!