Thanks for writing this up, this is a really interesting idea.
Personally, I find points 4, 5, and 6 really unconvincing. Are there any stronger arguments for these that don't consist of pointing to a weird example and then appealing to the intuition that "it would be weird if this thing was conscious"?
Because my own intuition tells me that all these examples would be conscious. This means I find the arguments unconvincing, but also hard to argue against!
But overall I get that given the uncertainty around what consciousness is, it might be a good idea to use implementation considerations to hedge our bets. This is a nice post.
I find points 4, 5, and 6 really unconvincing. Are there any stronger arguments for these that don't consist of pointing to a weird example and then appealing to the intuition that "it would be weird if this thing was conscious"?
I'm not particularly sympathetic with arguments that rely on intuitions to tell us about the way the world is, but unfortunately, I think that we don't have a lot else to go on when we think about consciousness in very different systems. It is too unclear what empirical evidence would be relevant, and theory only gets us so far on its own.
That said, I think there are some thought experiments that should be compelling, even though they just elicit intuitions. I believe that the thought experiments I provide are close enough to this for it to be reasonable to put weight on them. The mirror grid, in particular, just seems to me to be the kind of thing where, if you accept that it is conscious, you should probably think everything is conscious. There is nothing particularly mind-like about it; it is just complex enough to read any structure you want into it. And lots of things are complex. (Panpsychism isn't beyond the pale, but it is not what most people are on board with when they endorse functionalism or wonder if computers could be conscious.)
Another way to think about my central point: there is a history in philosophy of trying to make sense of why random objects (rocks, walls) don't count as properly implementing the same functional roles that characterize conscious states. Some accounts of proper implementation have been given, but it is not clear that those accounts wouldn't also rule out consciousness in contemporary computers. There are plausible readings of those accounts on which contemporary computers would not be conscious no matter what programs they run. If you don't particularly trust your intuitions, and you don't want to accept that rocks and walls properly implement the functional roles of conscious states, you should probably be uncertain over exactly which account is correct. Since many of them would rule out consciousness in contemporary computers, you should lower the probability you assign to that.