I’m a little confused here. Where does MIRI or FHI say anything about consciousness, much less assume any particular view?

My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversations with people who work there, and ‘revealed preference’ research directions. OpenPhil may be more of a stretch to categorize in this way; I’m going off what I recall of Holden’s debate on AI risk, some limited personal interactions with people who work there, and Luke Muehlhauser’s report (he was up-front about his assumptions on this).
Of course it’s harder to pin down what people at these organizations believe than it is in Brian’s case, since Brian writes a great deal about his views.
So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.
Wait, are you equating “functionalism” with “doesn’t believe suffering can be meaningfully defined”? I thought your criticism was mostly about the latter; I don’t think it’s automatically implied by the former. If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.
(You could reasonably argue that it doesn’t look likely that functionalism will provide such a theory, but then I’ve always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our “least wrong” placeholder theory until somebody comes up with something better.)
Functionalism seems internally consistent (although perhaps too radically skeptical). However, in my view it also seems to lead to some flavor of moral nihilism; consciousness anti-realism makes suffering realism difficult/complicated.
If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.
I think whether suffering is a ‘natural kind’ is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.
I’ve always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our “least wrong” placeholder theory until somebody comes up with something better.
Part of my reason for writing this critique is to argue that functionalism isn’t a useful theory of mind, because it doesn’t do what we need theories of mind to do (adjudicate disagreements in a principled way, especially in novel contexts).
If it is a placeholder, then I think the question becomes, “What would ‘something better’ look like, and what would count as evidence that something is better?” I’d love to get your (and FRI’s) input here.
I think whether suffering is a ‘natural kind’ is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.
I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is “real” or not?
If it is a placeholder, then I think the question becomes, “What would ‘something better’ look like, and what would count as evidence that something is better?”
What something better would look like—if I knew that, I’d be busy writing a paper about it. :-) That seems to be a part of the problem—everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I’d like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser’s)
everyone (that I know of) agrees that functionalism is deeply unsatisfactory
I don’t. :) I see lots of free parameters for what flavor of functionalism to hold and how to rule on the Aaronson-type cases. But functionalism (perhaps combined with some other random criteria I might reserve the right to apply) perfectly captures my preferred way to think about consciousness.
I think what is unsatisfactory is that we still know so little about neuroscience and, among other things, what it looks like in the brain when we feel ourselves to have qualia.
I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is “real” or not?
Ah, the opposite, actually: my expectation is that if ‘consciousness’ isn’t real, ‘suffering’ can’t be real either.
What something better would look like—if I knew that, I’d be busy writing a paper about it. :-) That seems to be a part of the problem—everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I’d like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser’s)
Thanks, this is helpful. :)
The following is tangential, but I thought you’d enjoy this Yuval Noah Harari quote on abstraction and suffering:
In terms of power, it’s obvious that this ability [to create abstractions] made Homo sapiens the most powerful animal in the world, and now gives us control of the entire planet. From an ethical perspective, whether it was good or bad, that’s a far more complicated question. The key issue is that because our power depends on collective fictions, we are not good in distinguishing between fiction and reality. Humans find it very difficult to know what is real and what is just a fictional story in their own minds, and this causes a lot of disasters, wars and problems.
The best test to know whether an entity is real or fictional is the test of suffering. A nation cannot suffer, it cannot feel pain, it cannot feel fear, it has no consciousness. Even if it loses a war, the soldier suffers, the civilians suffer, but the nation cannot suffer. Similarly, a corporation cannot suffer, the pound sterling, when it loses its value, it doesn’t suffer. All these things, they’re fictions. If people bear in mind this distinction, it could improve the way we treat one another and the other animals. It’s not such a good idea to cause suffering to real entities in the service of fictional stories.
The quote seems very myopic. Let’s say that we have a religion X that has an excellent track record at preventing certain sorts of defections by helping people coordinate on enforcement costs. Suffering in the service of stabilizing this state of affairs may be the best use of resources in a given context.
I think that’s fair—beneficial equilibria could depend on reifying things like this.
On the other hand, I’d suggest that with regard to identifying entities that can suffer, false positives are much less harmful than false negatives, but they still often incur a cost. E.g., I don’t think corporations can suffer, so in many cases it’ll be suboptimal to grant them the sorts of protections we grant humans, apes, dogs, and so on. Arguably, a substantial amount of modern ethical and perhaps even political dysfunction is due to not kicking leaky reifications out of our circle of caring. (This last bit is intended to be provocative and I’m not sure how strongly I’d stand behind it...)

Yeah, the worry about an S-risk minimizer being trivially exploitable, etc.
What something better would look like—if I knew that, I’d be busy writing a paper about it. :-) That seems to be a part of the problem—everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like.

An additional note on this: I’d propose that if we split the problem of building a theory of consciousness up into subproblems, the task gets a lot easier. This does depend on an elegant problem decomposition. Here are the subproblems I propose: http://opentheory.net/wp-content/uploads/2016/11/Eight-Problems2-1.png

A quick-and-messy version of my framework:
(1) figure out what sort of ontology you think can map to both phenomenology (what we’re trying to explain) and physics (the world we live in);
(2) figure out what subset of that ontology actively contributes to phenomenology;
(3) figure out how to determine the boundary of where minds stop, in terms of that-stuff-that-contributes-to-phenomenology;
(4) figure out how to turn the information inside that boundary into a mathematical object isomorphic to phenomenology (and what the state space of the object is);
(5) figure out how to interpret how properties of this mathematical object map to properties of phenomenology.
The QRI approach is:
(1) Choice of core ontology → physics (since it, or some future version of it such as string theory, maps cleanly to physical reality);
(2) Choice of subset of core ontology that actively contributes to phenomenology → Andres suspects quantum coherence; I’m more agnostic (I think Barrett 2014 makes some good points);
(3) Identification of boundary condition → highly dependent on (2);
(4) Translation of information in partition into a structured mathematical object isomorphic to phenomenology → I like how IIT does this;
(5) Interpretation of what the mathematical output means → Probably, following IIT, the dimensional magnitude of the object could correspond with the degree of consciousness of the system. More interestingly, I think the symmetry of this object may plausibly have an identity relationship with the valence of the experience.
Anyway, certain steps in this may be wrong, but that’s what the basic QRI “full stack” approach looks like. I think we should be able to iterate as we go, since we can test parts of (5) (like the Symmetry Hypothesis of Valence) without necessarily having the whole ‘stack’ figured out.
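To make that last point slightly more concrete, here is a minimal toy sketch in Python (not QRI’s or IIT’s actual formalism; just an illustration, with a made-up symmetry_score function and made-up matrices, of the kind of cheap proxy one could compute and then try to correlate against reported valence):

```python
# Toy sketch only: a crude "symmetry score" for a system represented as a
# weighted connectivity matrix. Not QRI's or IIT's actual formalism; it just
# shows that "symmetry of the mathematical object" is the kind of quantity
# one can operationalize and test while the rest of the stack is unsettled.
import numpy as np

def symmetry_score(m):
    """Return a score in [0, 1]; 1.0 means the matrix equals its transpose."""
    m = np.asarray(m, dtype=float)
    asymmetry = np.linalg.norm(m - m.T)              # Frobenius distance from perfect symmetry
    scale = np.linalg.norm(m) + np.linalg.norm(m.T)  # normalizer (triangle-inequality bound)
    return 1.0 if scale == 0 else 1.0 - asymmetry / scale

# Hypothetical comparison: a highly regular network vs. a noisier one.
rng = np.random.default_rng(0)
regular = np.ones((8, 8)) - np.eye(8)         # perfectly symmetric connectivity
noisy = regular + 0.5 * rng.random((8, 8))    # adds asymmetric noise

print(symmetry_score(regular))  # 1.0
print(symmetry_score(noisy))    # strictly less than 1.0
```

Of course a real test would use a principled mathematical object from step (4) rather than a raw connectivity matrix; the point is just that step (5) yields quantities that can be checked piecemeal.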
Well, I think there is a big difference between FRI, where this point of view is at the forefront of their work and explicitly stated in research, and MIRI/FHI, where it’s secondary to their main work and is only something inferred from what their researchers happen to believe. Plus, as Kaj said, you can be a functionalist without being all subjectivist about it.
But Open Phil does seem to have this view now to at least the same extent as FRI does (cf. Muehlhauser’s consciousness document).
I think a default assumption should be that works by individual authors don’t necessarily reflect the views of the organization they’re part of. :) Indeed, Luke’s report says this explicitly:
the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.
Of course, there is nonzero Bayesian evidence in the sense that an organization is unlikely to publish a viewpoint that it finds completely misguided.
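To spell out the update in odds form (the factor of five is purely illustrative, not a real estimate): if an organization were, say, five times more likely to host a piece whose framework it broadly endorses than one it considers misguided, then publication multiplies the prior odds by that likelihood ratio:

\[
\frac{P(\text{endorses} \mid \text{publishes})}{P(\text{rejects} \mid \text{publishes})}
= \frac{P(\text{publishes} \mid \text{endorses})}{P(\text{publishes} \mid \text{rejects})}
\cdot \frac{P(\text{endorses})}{P(\text{rejects})}
\approx 5 \cdot \frac{P(\text{endorses})}{P(\text{rejects})}.
\]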
When FRI put my consciousness pieces on its site, we were planning to add a counterpart article (I think defending type-F monism or something) to have more balance, but that latter article never got written.
MIRI/FHI have never published anything which talks about any view of consciousness. There is a huge difference between inferring based on things that people happen to write outside of the organization, and the actual research being published by the organization. In the second case, it’s relevant to the research, whether it’s an official value of the organization or not. In the first case, it’s not obvious why it’s relevant at all.
Luke affirmed elsewhere that Open Phil really heavily leans towards his view on consciousness and moral status.