Contact with reality
(Cross-posted from Hands and Cities)
In thought experiments descended from Nozick’s classic “experience machine,” you consider how being plugged into a machine that generates the experience of a certain kind of life (generally, a very pleasant one) compares with some alternative. Such comparisons are meant to tease apart the purely experiential aspect of life from other factors — in particular, factors related to what we might call “contact with reality.”
This post examines the idea of “contact with reality.” In particular, I try to evoke and defend the possibility (though not the obligation) of caring about contact with reality, regardless of its impact on your own pleasure, or on the lives of others.
I. Experience machines
As a tool for thinking about the value of “contact with reality,” cases involving experience machines are often left importantly underspecified: the question of what sort of “contact with reality” one has is not settled by the fact that one is plugged into a machine. In particular, the following four questions seem important:
Are you alone in your simulated world? E.g., are there other conscious people in this world, or not?
How wrong are you about the simulated world?
How wrong are you about the real world?
Do you know that you’re in the machine, once you’re in it? (The usual assumption is no).
(We can also ask questions about things like autonomy, but I’m not going to focus on these.)
Questions about the non-machine alternative matter a lot, too. For example, the duties, relationships, and opportunities to do good one has or would have in the non-machine alternative are clearly relevant (though note that these matter in certain simulated worlds, too). And more generally, an informed comparison requires knowing what sorts of experiences, and what types of contact with reality, the non-machine alternative offers.
Finally, various specific questions about the choice set-up matter. Do you start out in the real world, or the machine world? Are you allowed to try out each alternative, to see what it’s like? Can you change your mind later? Can you split your time? How long is one committed to one vs. the other at a stretch? How much psychological continuity (including continuity of memory) is there as you switch? And so forth.
II. Being alone
The versions of the machine world that seem to me most informative involve being (a) alone, and (b) wrong — not just about the real world, but about the simulated world as well.
Let’s start with alone. This means, centrally, that any “people” one interacts with in the simulated world aren’t conscious — a condition that doesn’t follow from the fact that one is in a simulated world. For example, there might be other biological humans plugged into the same world — as in, e.g., The Matrix. But more broadly, sufficiently sophisticated simulated people would plausibly be conscious, too.
One way to make sure you’re alone in the machine world is to posit that the “people” you interact with are phenomenal zombies (i.e., behaviorally identical to conscious people, but without anything it’s “like” to be them). But this gets into unnecessarily complicated philosophical territory. I prefer versions in which the other people are the simulation equivalent of sophisticated but ultimately low-resolution cardboard cut-outs, which disappear when you look away, but which you’re fooled — perhaps due to direct, machine-facilitated intervention on your epistemology, rather than due to the complexity and convincingness of the simulated people themselves — into thinking of as real people. This episode of Rick and Morty depicts a scenario in this vicinity; one can also imagine something akin to the Truman Show, but with temporary and low-resolution data structures instead of actors. If you think that even structures of this type would be conscious, try to lower the resolution as far as you can, and to increase the level of epistemic intervention by the machine.
If set up this way, the machine world is one where, when you look into your partner’s eyes, there’s no one looking back: you’re gazing at the simulation equivalent of cardboard, with a moving face. You’re in love with a painted doll. You’ve married a mirage; your vows were to a flickering void. When you hold your partner — together, you think, in the midst of it all — your arms are, basically, empty. But the machine makes you think they’re not, and it still feels really good.
Another way of making you alone in the machine world is just to not include people — even simulated people — in that world. Thus, it might be a world where you lie alone on the beach, in total bliss; or a world where you pursue some solitary hobby that the machine makes you very passionate about, like math puzzles or carpentry. This is a helpful possibility to keep in mind, because it teases apart what matters about being alone in the machine world per se, vs. being wrong about what sorts of relationships you have. The latter is ultimately just a way of being wrong more generally. Let’s turn to that aspect now.
III. Being wrong
It’s often assumed that in the machine world, one’s everyday beliefs — e.g., “there’s milk in the fridge,” “I’m going bowling on Friday,” etc — are pervasively false. As Chalmers (2005) discusses, though, this isn’t necessarily so: your milk beliefs, for example, may refer to and correctly represent the simulated milk in the simulated fridge. Trying to settle this question would take us too far afield, but we can avoid it by using versions of the scenario in which one is wrong even about simulated things.
Thus, for example, we can imagine a physicist, plugged into the machine, who is trying to work out the laws of physics that govern her (simulated) world. She works passionately on the project, with many moments of epiphany, joy, discovery, and excitement, as her grand theory takes shape. However, the machine is feeding her data that’s fake even by the simulation’s standards. She receives reports of eclipses that the physics engine of the simulation would not cause; she gets fake pictures from the simulated Hubble; she gets data from the simulated particle collider that the simulated particle collisions wouldn’t produce (the collider was never run). What’s more, the machine causes her to misinterpret the data she receives; and more generally, her theories are inconsistent and riddled with basic reasoning errors. Indeed, from the outside, she resembles someone pitifully deluded, building flimsy, myopic castles on a foundation of lies. Her theory displays no elegance or insight even granted its false assumptions. She thinks that she’s tracing the contours of some grander majesty; she’s delighted, awed; but ultimately, she’s confused — enraptured by something small and made-up and paper-thin.
Alternatively, we can imagine an activist in the machine, who campaigns in support of policies that the machine makes seem superficially inspiring and righteous, but that would actually be disastrous, even for the simulated society. Her life is full of apparent meaning and purpose; apparent friendship and solidarity; apparently hard-won victories for apparent justice; and apparently awesome parties afterwards. It all feels very important and engaging and alive, not to mention pleasant. But really, the victories were easy (indeed, the activist’s strategies were wildly naive, and the machine had to bend over backwards to make it seem like they worked) and false (nothing will change); the cause was unjust (indeed, horrifying); the relevant friends, allies, and potential beneficiaries/victims of the cause didn’t exist, even in the simulation. Even the conversation at the simulated parties was petty and boring (good conversation, let’s imagine, is more expensive to simulate): the machine just made her like it.
In these cases, most of the false beliefs in question are plausibly about the simulated world. But people often have beliefs that target the “real world” more broadly: beliefs like “I’m not in an experience machine,” “this is the only world,” “no one created the world I live in,” and so forth. Thus, for example, we can imagine a metaphysician in the machine, who devotes her entire, very pleasurable career to arguing for her deep conviction that she’s not in a simulation; we can imagine a mathematician in the machine, who the machine tricks into thinking that she is proving theorem after theorem that would hold true in all possible worlds, but which are actually gibberish; we can imagine a holy woman in the machine, who thinks that her ecstatic visions of a giant, roiling bowl of neon spaghetti elves are unadulterated visions of God in His pure essence, when in fact the machine is just randomizing some images from her memory.
Note that in all of these cases, I’m not just talking about people who have false beliefs. It’s not that these people are mistaken about the number of socks in their simulated sock drawers; or the number of sand-grains on the simulated beach. Rather, I’m talking about people whose false beliefs are embedded in sets of attitudes and activities that are what I’ll call “reality-oriented,” and which they take as central to their simulated lives. More on this below.
IV. Well-being and choice
Let’s consider, then, a version of the experience machine that involves (a) simulated people who aren’t actually conscious or even very complicated, and (b) pervasive deception and falsehood, even about the simulated world itself. I hope it’s clear how big of a difference these specifications make, and how strange it is to try to think about the case without first pinning them down. Choosing to live in an amazing simulated world with real friends and loves and adventures and discoveries is very different from choosing to live in blissed-out delusion, alone: and this especially if you’re allowed, in the former case, but not in the latter, to understand your condition and to remember your choice (why would you need to forget?). The thing at stake here is not bits vs. atoms, or the special value of “basement universes.” It’s something else.
Philosophers tend to discuss experience machines in the context of theories of “well-being” or “prudential value” — that is, theories of what makes something good for you; what type of thing purely selfish agents should be concerned with. If well-being is only about pleasure/pain (this view is called “hedonism” about well-being), the thought goes, then entering the machine is, from a prudential perspective, a good move (assuming you’ll do better re: pleasure/pain that way). If, by contrast, entering the machine is sometimes a bad prudential move, then well-being must be about more than pleasure/pain/internal experience.
I’m not, here, centrally concerned to argue about what constitutes “well-being”: indeed, I don’t currently expect the term to carve the space of what we care about at particularly useful joints, though I won’t get into that here. Mostly, I just want to point at and clarify the possibility of caring about “contact with reality,” for reasons that aren’t about promoting one’s own pleasure, or about helping others. Whether we call this a “prudential” pattern of concern doesn’t seem to me especially important at present, except insofar as it has implications for how much weight someone’s commitment to maintaining contact with reality should be given in thinking about how to do right by them (my answer is: “a lot of weight”).
Some people don’t care about contact with reality, except insofar as it matters for promoting their own pleasure or helping others. That’s OK. I’m not arguing that this is bad or wrong, or that it’s important for these people to seek or be given something they don’t actually value. Indeed, when I think about a friend of mine who, if the world’s problems were solved, would happily plug himself into an experience machine, I feel good about him having the chance. That’s what he wants, at least assuming that he’d still make this choice after ideal reflection. I’d miss him, yes. But he’s worked hard trying to help other people; this is what he prefers for himself. Indeed, in his case, the machine seems to me almost an image of rest.
But it’s also OK to care about other things, too. This seems obvious in one sense, but I say it partly because I think it’s possible, especially when in the grip of certain ethical theories, to have a vague sense that maybe you “should,” in some sense, want to plug into the experience machine; but also to feel some resistance to this, maybe even sadness. Maybe you kind of hope that on the “true theory of well-being,” there’s more to life than pleasure and pain; more than the painting on the wall of your mental cave; but you’re worried that there isn’t.
But when you choose whether or not to go into the machine, you don’t have to be trying to get as many “well-being points” as possible, whatever those are. Rather, you can just be choosing between pleasure and other things. If you care about the other things, you can choose them. What you’ll get as a result is just: your life, with less pleasure, and more of something else. If you want to learn the real history of Rome, and someone says “here’s a fake, more entertaining history, and a button that will make you think it’s real,” you can just decline, not for the sake of your future pleasure, or the pleasure of others, but because you want to know what happened in Rome. There aren’t any well-being Gods to laugh at you. It’s just you and your book.
For simplicity, I’m going to continue to talk in what follows about some concept in the vicinity of prudence or well-being. But I want to be clear that I’m mostly trying to articulate a type of thing that it’s possible and legitimate to care about non-instrumentally. It’s not something you “have” to care about; but not something you have to stop caring about, either.
V. My preferred set-up
A few more comments on setting up the case.
As noted above, actually choosing to enter an experience machine would, in all likelihood, have important implications for people other than you: specifically, you would be leaving behind your relationships, obligations to others, and opportunities to make the world better, etc (note that the same would hold of the choice to leave the machine, if the simulated world also held persisting relationships, opportunities to help, etc). For some people, this is the main sticking point. If all the problems of the world were solved (or they knew they wouldn’t be able to help with any of these problems), and they owed nothing to anybody, then they would very happily enter or stay in the machine. Until then, duty calls. To screen off this consideration, then, we need to imagine cases in which either duty doesn’t call; or in which we manage to focus on prudential reasons in particular.
The former route is actually somewhat challenging. Thus, for example, if we want you to be alone in the machine, but not alone in the real world, then preventing you from being able to do more good in the real world takes some work, since your real relationships are at risk of being helpful to others. And similarly, we might think that there will always be some chance that you’re wrong about the real world’s level of need. I think it’s better, then, to just try to set your reasons to help others, intuitively, aside.
The specific differences in the goods available in the machine vs. outside of it matter too. Thus, for example, De Brigard (2010) asked subjects whether they would leave their current lives, upon learning that those lives were simulated (whether you are alone or deceived in the simulation is left unspecified in his vignette), in favor of real life as “a prisoner in a maximum security prison in West Virginia” (p. 47). Unsurprisingly, the majority (87%) stayed plugged in. But non-hedonists need not think pleasure irrelevant to what they want out of life, or treat “contact with reality” (especially of the kind available in a maximum security prison) as lexically more important than pleasure and pain. You can like both oranges and apples, and still prefer a hundred apples to a rotten orange.
What’s more, if you ask people to assume that their current life is actually a simulation optimized for maximum pleasure, and then ask them if they’d like to leave it for the less optimized world beyond, it seems reasonable for them to assume, absent further information, that the world beyond is pretty bad. “This is what a fake life optimized for pleasure looks like?” they might say, gesturing at their stale toast, tax forms, and back pain medication. “Man, the real world must be terrible.”
One might think that a natural response to this problem would be to hold all the experiential facts fixed, and vary only the “contact with reality” facts. Thus, for example, one might imagine:
(1) Einstein’s experiences, but in a simulated world feeding him fake data, with a fake wife, family, etc, vs.
(2) Einstein’s experiences, in the real world where they actually occurred.
If there is any prudential reason to prefer (2) to (1), one might think, then hedonism is false (see Lin (2016) for discussion of this type of comparison). However, these sorts of “all else equal” intuitions are complicated somewhat by the fact that if one assigns any credence to non-experiential goods having prudential value, one should take (2) over (1): the relevant “contact with reality” is, in this case, free. That said, these sorts of comparisons can at least highlight the weight we place on non-experiential goods, whatever its source. Thus, for example, (2) might continue to seem prudentially superior to (1) even if we boost the pleasure of the fake discoveries and fake family life in (1) substantially.
Finally, various philosophers sympathetic to hedonism sometimes argue that people’s aversion to the experience machine is driven by status quo bias (they also offer other explanations, like the possibility that people are worried that the experience machine will malfunction — see e.g. Weijers (2014), p. 516; this, though, seems very far away from what’s driving my own reaction, at least). De Brigard (2010), for example, suggests that people surveyed end up split something more like 50-50 about whether to unplug, in various cases where they learn that they’ve already been living in the machine (except in the prison case above, in which case it’s 13% unplug vs. 87% stay-plugged); and Weijers (2014) finds roughly comparable numbers (e.g., ~30%-55% of undergrads opting for the machine) for cases in which the most pleasant experiences in life thus far have been machine-made, and the least pleasant have been non-machine (in some of these cases, you’re saying what would be best for someone other than yourself). By contrast, the plug-in rate for Nozick’s original thought experiment, in which you start out in the real world, was only 16% in Weijers’ surveys (glancing at the paper, it looks like he gave paper surveys to undergrads, with ~80 people participating each time, but I haven’t looked in detail).
Let’s stick with choosing for oneself for now: other people, after all, might care about contact with reality to a different degree than one does. For simplicity, I’ll assume a De Brigard-esque scenario, where you’ve already been in the machine your whole life. From the perspective of status quo bias, this biases somewhat in favor of the experience machine, since it treats life in the machine as the status quo. But I’m OK with that: I expect that people with a clear grip on why they don’t like the machine (as opposed to, e.g., undergraduates or MTurk workers taking a survey) can overcome the bias in question.
VI. My preferred version
Here, then, is a shot at my preferred version of the case:
You learn that you’ve been living your whole life in an experience machine, in which you are both alone and systematically deluded, even about your simulated world. None of your friends, family, lovers, etc really exist; you’ve only ever interacted with low-resolution, non-conscious simulations that the machine makes you find convincing and complex. No one here loves you, or cares for you, because there’s no one here at all, except you; there isn’t anyone to miss you when you’re gone, or anyone you should stay to help.
Nothing beyond your experience that matters to you is the way it seems. When you look away from something, it disappears. Every time you try to think things through, the machine will cause you to make mistakes of reasoning that you won’t notice: indeed, you’ve already been making lots of these. You’re hopelessly confused on a basic level, and you’ll stay that way for the rest of your life.
However, if you stay in the machine, the balance of pleasure vs. pain in your life will stay roughly what it’s been so far, and you’ll be allowed, if you want, to forget about the machine and about your choice to stay. You’ll never have another chance to leave.
If you choose to leave, the real world you’d be entering will have a somewhat worse balance of pleasure vs. pain, for you, than your current world. And you won’t be able to improve the real world much, either. Out there, though, you can meet real people, with their own rich and complex lives; you can make real friends, and be part of real relationships, communities, and institutions. You can wander cities with real history; you can hear stories about things that really happened, and tell them; you can stand under real skies, and feel the heat of a real sun. People out there are doing real science, and discovering real things. They’re barely beginning to understand the story they’re a part of, but they can understand. You can understand, too; you can be a part of that story, too. No one knows, yet, what’s going to happen.
If you choose the real world, you can’t come back to the machine.
Obviously, I’m not trying to be rhetorically unbiased, here. Rather, I’m trying to evoke an intuitive sense of what the contrast between this type of experience machine and the real world can mean, and what directing your life based purely on hedonism implies. For the hedonist, the prudential verdict in this case is fixed entirely by the phrase “the real world you’d be entering will have a somewhat worse balance of pleasure vs. pain, for you, than your current world.” That’s all the hedonist needs to know, to know what prudence favors.
Of course, we can construct versions of the case where the relevant factors vary more or less dramatically. Make the simulation much more or less blissful; make the real world more or less painful, or more deluded in its own right. Non-hedonists will differ as to when they stay and go. But the purely prudent hedonist never left.
In many moods, and especially for pretty moderate hedonic differences, cases of this kind currently leave me with a clear preference for the real world. But I don’t think the choice is simple, and depending on the specific nature of the hedonic differences in question, a part of me sometimes hesitates, even as I feel the pull of the contact with reality that the real world offers. In my case, I think this is centrally because the experiential texture of life also matters a lot — though not, I think, in a manner easily captured by straightforward hedonism. Thus, for example, if we specify that both the pleasures and the pains in the real world are more intense and vivid than the ones in the machine, and you’ll have more of both if you leave, but disproportionately more of the pains, such that the overall balance of pleasure vs. pain is still worse, then I feel especially clear about wanting to leave the machine; but I expect that I would also leave a blander and more tepid experience machine for a more intense (but overall more unpleasant) one, at least in some cases (though here questions about what it means to weigh pleasures vs. pains loom large).
That is, what I centrally want out of experience, it seems to me, is not “pleasure” in its most straightforward connotation, but something more like energy, aliveness, vividness, awake-ness, engagement, capacity for attention. If my experience of reality will be more dead and drab and empty, such that upon leaving the machine the color drains from the world, and I am left with something like perpetual low-grade depression, listlessness, or exhaustion, then this, indeed, gives me pause. I’m not spitting on experience here as something shallow. Something I really care about would be lost.
Indeed, if you can keep the experience machine available even after you depart for the real world, it makes sense to me to use it as a kind of fall-back; a place for a certain type of rest and comfort; even, perhaps, a strange type of “home.” And it makes sense to me for other people in the real world to take stints in experience machines, too, if they could. (My girlfriend’s take: “You’ve got to at least try it: if you’re really interested in reality, surely you want to see what an experience machine is like.”) And if you have the option to try out both worlds, and then to make a permanent choice later (or never), you should probably take it.
What’s more, when I try to actually imagine facing a choice of this kind, I notice that I still feel an intense kind of sadness at leaving this beautiful world that I love so much, mirage or no. I look out at the lights of San Francisco, and the stars; I think about everything I have seen and done in my life thus far. Maybe these things never happened; maybe this city doesn’t exist; but there was something beautiful all the same; something I feel some need to say goodbye to.
With people it’s a bit more complicated. If my girlfriend and my friends and my family are mannequins, I feel more acutely a sense of horror and disorientation — much more than if I learn that e.g. the parts of California I haven’t been to don’t exist. But I’m still torn; I still feel loyalty and care towards some possibility of a person, who doesn’t exist, but who could exist, who I thought existed. I want to talk with that person, to explain why I’m leaving. And if the mannequins beg me to stay, and the machine continues to make them seem convincing, I imagine feeling like I’m talking to two things at once — a doll, and a possible human — and trying, maybe desperately, to convince the possible human that they don’t really exist, at least not here. I feel some hope that they’d understand, even if the doll doesn’t. “Oh,” I imagine them saying, on learning of their non-existence. “You’re right. You should go. I love you. I hope we meet each other somewhere.”
VII. Is this really about prudence?
Is prudence even the right term, here, for the type of thing that motivates leaving the machine? Or “well-being?” It feels, to me, too thin, too self-y. It evokes a choice between broccoli and cupcakes; a smart investment; an efficient selfishness. But the right term isn’t “duty,” either, or “morality.” Maybe you have a duty to go to the real world, or some other type of moral reason, stemming from what limited opportunity you’ll have to help. But to me at least, this really doesn’t capture everything at stake. The real world doesn’t just call to us because it “needs us”; the real world calls to us because it’s, well, the real deal, the actual thing. If you’re listening to some tinny pop song, and you learn that God is playing a concert, you don’t just go for the sake of future pleasure, or to better help others: you go because it’s God. Or at least, I do.
Sometimes, philosophers attempt to carve out some further category of “perfectionist” value, which stems neither from your own well-being, nor from the well-being of others. Thus, perhaps a great painting, or a deep mathematical proof, has some kind of perfectionist value, independent of how good it is “for” any particular person. This, though, feels too impersonal to me. Just as it feels strange to choose the real world solely because it’ll be better “for you,” or because it will be better “for others,” so too it feels strange to do it because it will be better “for the universe,” or from the universe’s perspective, or from no one’s.
Somehow (I don’t have a clear articulation here), this whole ethical ontology feels like it’s missing something: some sense of relation, or of dialogue. Either something is in your well-being bubble, reflected in the number your slot gets on the population ethics whiteboard; or it’s in someone else’s bubble; or — exotically — it’s in no one’s bubble, but plays a role in some “ranking” regardless. But somehow everything is bubbles; atoms; objects.
You want to look the universe in the eye, though, not just to make your life better, or to make someone else’s life better, or to make the universe’s life better; not to make a pretty picture that no one sees; not to carve into the firmament the right type of four-dimensional statue; not just any of these things, or any set.
Why, then?
VIII. Caring about contact with reality
I remember a happy hour I went to a few years ago, in which a group of us ended up talking about status-quo-adjusted experience machines like the one I just described. One of the people in the conversation was a physicist, and he wanted, strongly, to leave such a machine.
Something about his reaction to the case was moving to me, and it made me trust him more. I felt like I saw, in that moment, something about what mattered to him about physics, and about knowledge more generally. I had some vision of him stepping through a doorway, out of the machine, and into the raw wind of the real world, on a planet dusty and cold — knowing why he was doing it, and what he sought. I don’t even remember if we specified much about pleasure.
I imagine the physicist in the machine I discussed above feeling similarly; and so, too, the mathematician in the machine, and the philosopher, and the holy woman. These people are horribly deluded by the machine, yes; but they are, let us imagine, really trying to see and relate to reality — and not purely as a route to a certain type of emotional juice. The physicist wants to understand the real physics; the mathematician, to prove the real theorems; the philosopher, to learn the true metaphysics; the holy woman, to meet the real God. They would not, upon learning of their condition, decide to forget. They’ll take the doorway, and the wind.
On the holy woman in particular: people talk a lot about religion as a comfortable delusion, or a way of playing a kind of emotional, symbolically-mediated pretend. And for some people it is. For others, though, it’s more serious. In particular, I have in mind religious people for whom the idea of worshipping a false God, whatever the pleasures and comforts it brings, seems a horrifying and repugnant delusion; a snug and gauzy cocoon. To be chasing after the myopic flickerings of your mind, when there is a reality out there, is to fail utterly in precisely the effort to step beyond oneself and one’s illusions that animates the whole thing — or at least, one version of it. If there’s no God, if they’re worshipping an idol, they want to know.
We can imagine another reason one might leave the machine: namely, the discovery (let’s imagine it’s true) that out there, beyond the machine, are real-life, high-resolution versions of the friends and lovers and teachers you thought you knew. Versions not so simple, or so familiar; and not, perhaps, so comfortable. But versions that actually look back at you. Of course, the prospect of meeting higher-resolution versions of people you knew in a deluded machine world is unlikely to be emotionally straightforward, especially given that they won’t know who you are. But hopefully, the idea can illustrate another type of pull.
What all of these things — physics, math, philosophy, some types of religion, some types of friendship and love — have in common, I suggest, is that they are reality-oriented. That is, they’re guided by a particular type of non-instrumental relationship to reality, as opposed to appearance; and so, too, are many of their paradigmatic practitioners. Indeed, lots of other things, far less grandiose in connotation (gardening? portraiture? stamp collecting?) are like this too (here I imagine a stamp collector, horrified to learn that the stamps in the machine world were cheap fakes); and the instrumental and the non-instrumental can mix in complex ways.
At a high level, my basic point is that being non-instrumentally reality-oriented is a legitimate, sensible — and indeed, quite commonplace — way to be. This may seem obvious to many: one can, after all, care non-instrumentally about all kinds of things, the truth, obviously, amongst them. To others, though, my sense is that this pattern of concern can seem mysterious and confused. My hope is that the attempts to illustrate it above can help.
IX. The problem of the numinous
I’ll close with a type of hard-to-articulate doubt I feel about the idea of “contact with reality” — one that I’m not sure many will share. In contexts like the physics or math examples above, the idea of “contact with reality” need not go much beyond the idea of “accurate representation,” where the accuracy question matters to someone non-instrumentally. And indeed, this limited notion is all I think necessary to get many intuitive objections to experience machines off the ground. Representations can clearly be more or less accurate, and we can clearly care about that as a final value. For present purposes, we need not say more.
If we try to say more, though, we might start to wonder: how accurate are our representations, really? Say I believe that there’s some milk in the fridge; and say that I’m right. Cool. But beyond my scattered images and ideas about milk, beyond the predictions about and correlations with the milk that my mind has set up, how much “contact” with the milk do I really have? At a fundamental level, the milk is, at least, some twisted crazy quantum thing, which I, personally, don’t have anything close to a clear grip on. And even if I understood our current physics deeply, and even if this were, in some sense, the physics, as opposed to our best current approximation, one is still tempted to wonder about the thing in itself: not the framework (however accurate) for predicting behavior, but the thing that behaves; bare being; what Kant called the noumenon. Can we have any contact (in a sense that involves something intuitively like “true seeing”) with that? (Kant thought: no.) And if not, what is this whole “contact” thing about, anyway?
This is the type of thing that I expect some people to really not worry about, and maybe to laugh at. And perhaps it does indeed rest on confusion. I wanted to mention it, though, because it still lurks, for me, as a lingering type of question about what it is to have truth-related “contact” with a world beyond your experience (whatever “experience” is), when that world, in itself, seems likely to be different in kind from the appearances and representations that mediate your relationship to it. A part of this, I think, is that I don’t currently feel like I have a clear, gears-level account of what it is to represent something, whether accurately or inaccurately. I get the idea of useful correlations between things (e.g., states of a cognitive system, and objects in the world), but pretty quickly, especially once we start talking about “maps” of things like milk or math in any detail, I feel like I’m waving my hands. I know that other people feel like they’re on stronger footing on this front. Maybe someday I’ll feel that way too. For now, I’m still confused.
What’s more, in some cases — e.g., spiritual experience, or love — the idea of “contact with reality” seems to suggest something beyond accurate representation: some deeper type of relationship, dialogue, or communion (see e.g. Buber’s I and Thou for gestures — I specifically recommend the Kaufmann translation, though I haven’t read others; and I recommend Kaufmann’s prologue, too). To look your partner, or the universe, “in the eye,” is not merely to have true beliefs about something. But what, then, is it?
I don’t know. Indeed, given that I don’t even have a clear account of representation at a basic level, it seems hard to get a clear account of something as vague as “looking something in the eye.” But I don’t dismiss whatever “looking something in the eye” is pointing at, either. And the real world — the world where people look back — is the place to understand it better.
This post is great, and I think it frames the idea very well.
My only disagreement is with the following part of the scenario you give:
The inclusion of this seems unhelpful to me, because it makes me wonder about the extent to which a version of me whose internal thought processes are systematically manipulated is really the same person (in the sense that I care about). Insofar as the ways I think and reason are part of my personality and identity, then I might have additional reasons to not want them to be changed (in addition to wanting my beliefs to be accurate).
As you identify, it may still be necessary to interfere with my beliefs for the purposes of maintaining social fictions, but this could plausibly require only minor distortions. Whereas losing control of my mind in the way you describe above seems quite different from just having false beliefs.
Thanks! Re: mental manipulation, do you have similar worries even granted that you’ve been manipulated in these ways all along? We can stipulate that there won’t be any increase in the manipulation in question, if you stay. One analogy might be: extreme cognitive biases that you’ve had all along. They just happen to be machine-imposed.
That said, I don’t think this part is strictly necessary for the thought experiment, so I’m fine with folks leaving it out if it trips them up.
Yes, I think I still have these concerns; if I had extreme cognitive biases all along, then I would want them removed even if it didn’t improve my understanding of the world. It feels similar to if you told me that I’d lived my whole life in a (pleasant) dreamlike fog, and I had the opportunity to wake up. Perhaps this is the same instinct that motivates meditation? I’m not sure.
This is beautifully written, and it helps me feel more appreciative of our world. I think I’d still prefer the experience machine in this scenario though, just due to the hedonic difference.