Computational functionalism about sentience: for a system to have a given conscious valenced experience is for that system to be in a (possibly very complex) computational state. That assumption is why the Big Question is asked in computational (as opposed to neural or biological) terms.
I think it is a little quick to jump from functionalism to thinking that consciousness is realizable in a modern computer architecture if we program the right functional roles. There might be important differences in how the functional roles are implemented that rule out computers. We don’t want to allow just any arbitrary gerrymandered states to count as an adequate implementation of consciousness’s functional roles; the limits on what counts as adequate are underexplored.
Suppose that Palgrave Macmillan produced a 40-volume atlas of the bee brain, where each neuron is drawn on some page (in either a firing or silent state) and all connections are accounted for. Every year, they release a new edition depicting a momentary time slice later, updating all of the firing patterns slightly after looking at the patterns in the last edition. Over hundreds of years, a full second of bee brain activity is accounted for. Is the book conscious? My intuition is NO. There are a lot of things you might think are going wrong here—maybe the neurons printed on each page aren’t doing enough causal work in generating the next edition, maybe the editions are too spatially or temporally separated, etc. I could see some of these explanations applying equally to contemporary computer architectures.
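To fix ideas, the process the atlas carries out can be pictured as a discrete dynamical system: each edition is a snapshot of neuron states, and the publisher’s yearly update plays the role of the transition function. Here is a minimal sketch, assuming a simple threshold update rule (the rule, the names, and the two-neuron fragment are illustrative assumptions, not anything fixed by the thought experiment):

```python
def next_edition(firing, connections, thresholds):
    """Compute edition t+1 from edition t.

    firing      -- dict: neuron id -> bool (drawn as firing or silent)
    connections -- dict: neuron id -> list of (presynaptic id, weight) pairs
    thresholds  -- dict: neuron id -> activation threshold
    """
    updated = {}
    for neuron, inputs in connections.items():
        # Sum the weighted input from presynaptic neurons that fired in the
        # previous edition, then apply the threshold.
        drive = sum(weight for pre, weight in inputs if firing[pre])
        updated[neuron] = drive >= thresholds[neuron]
    return updated

# A hypothetical two-neuron fragment: n1 excites n2, and n2 inhibits n1.
connections = {"n1": [("n2", -1.0)], "n2": [("n1", 1.0)]}
thresholds = {"n1": 0.5, "n2": 0.5}
edition = {"n1": True, "n2": False}

for year in range(3):  # three successive yearly editions
    edition = next_edition(edition, connections, thresholds)
    print(year, edition)
```

The functionalist puzzle is whether stepping this loop once per year through print runs, rather than once per millisecond in tissue, makes a difference that matters for consciousness.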
Thanks for the comment! I agree with the thrust of this comment. Learning more, and thinking more clearly, about the implementation of computation in general and neural computation in particular is perennially on my intellectual to-do list.

“We don’t want to allow just any arbitrary gerrymandered states to count as an adequate implementation of consciousness’s functional roles”

“maybe the neurons printed on each page aren’t doing enough causal work in generating the next edition”

I agree with the way you’ve formulated the problem, and the possible solution—I’m guessing that an adequate theory of implementation deals with both of these, perhaps via some condition requiring the right kind of “reliable, counterfactual-supporting connection between the states” (that quoted phrase is from Chalmers’ take on these issues).

But I have not yet figured out how to think about these things to my satisfaction.
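For concreteness, here is one toy way to picture what such a condition might rule in and out. This is a sketch under my own simplifying assumptions, not Chalmers’ actual formalism: say a mapping from physical states to computational states counts as an implementation only if the physical dynamics mirror the computational step for every state, including states the system never actually visits. A mere replay of a recorded run fails that test:

```python
def implements(physical_states, phys_step, comp_step, mapping):
    """Check that the mapping commutes with the dynamics on ALL states,
    not just the ones that happen to occur in the actual run."""
    return all(
        mapping[phys_step(p)] == comp_step(mapping[p])
        for p in physical_states
    )

# Computation to implement: a one-bit NOT gate applied at each step.
def comp_step(bit):
    return 1 - bit

# Genuine implementation: a voltage that flips between 0 V and 5 V.
phys_states = [0, 5]
phys_step = lambda v: 5 - v
mapping = {0: 0, 5: 1}
print(implements(phys_states, phys_step, comp_step, mapping))  # True

# "Replay" system: a tape index that advances through the recorded run
# 0, 1, 0 and then sticks at the end. It reproduces the actual history,
# but its own dynamics do not reliably mirror NOT for every state.
tape = [0, 1, 0]
tape_states = [0, 1, 2]
tape_step = lambda i: min(i + 1, 2)
tape_mapping = {i: tape[i] for i in tape_states}
print(implements(tape_states, tape_step, comp_step, tape_mapping))  # False
```

On this way of carving things, the worry about the atlas (and perhaps about some computer implementations) is that its states stand in something closer to the replay relation than to a counterfactual-supporting one.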