On this episode of the Utilitarian Podcast, I talk with Neil Sinhababu. Neil is a professor of philosophy at the National University of Singapore. Our conversation has two broad topics: we talk about metaethics, and we talk about world government as a way to prevent human extinction.
We discuss consciousness as the basis for ethics, reductionism about ethics, whether morality can be a science and how to handle feeling alienated from your own values. We then discuss world government as a way to solve collective action problems and to decrease extinction risk. And I ask whether creating a world government is itself risky because it might turn totalitarian.
The sound quality on my side is not the best in this episode, but fortunately, Neil has lots of interesting things to say, so he speaks the most.
As always, you can reach me at utilitarianpodcast@gmail.com if you have questions, suggestions, or criticism.
Here’s the transcript:
Gus: Neil, thank you for coming on the Utilitarian Podcast.
Neil: Wonderful to be here, Gus. Thanks for inviting me.
Intro to metaethics
Gus: Great. Okay. We’re going to talk about metaethics and just to begin with, could you introduce metaethics?
Neil: Yeah. In normative ethics, or ethics as people usually think about it, we’re trying to figure out which actions are right and wrong, which states of affairs are good and should be created, which ones are bad and should be avoided, which people are the good, virtuous people and which ones are the bad people. And there are also broader social questions about what justice is that blend into political philosophy at some point.
But metaethics is where we ask questions about the answers to those questions. Questions like: if it really is wrong to lie, is that an objective fact about the world? And if so, is it an objective fact that we can find in some scientific way, a fact about concrete stuff in the world that is empirically knowable?
Or is it something we have to know in some other way, for example through a special kind of intuition like the one we supposedly use in mathematics, according to some especially Platonist theories of mathematics? The old traditional way of putting it beyond science was to say that only God could set up good and evil.
So yeah, there’s a lot of options. If they are objective facts, how they could be objective facts. And there’s a lot of options where they aren’t objective facts, maybe our moral judgments are just expressions of our desires or emotions. And in that case, it’s fine to have moral judgments that don’t correspond to objective truth. Your desires usually don’t correspond to objective truth.
And if you’re just saying hooray for that, Hey no objectivity requirements. I think your moral judgments might be just fine because they’re just mere hoorays and boos. So that was the old non-cognitivist view. People have moved to more sophisticated things, but there’s a lot of options. There’s also the relativist option.
Maybe these aren’t objective and they’re like fashion. They’re just the preferences of a culture at a time. So all kinds of things you could say. Really among the views that say they aren’t objective, my favorite one is error theory that says there are just no moral facts, nothing non-objective could live up to being morality.
Since we just have a bunch of non-objective things where morality was supposed to be, we really have to say there is no moral truth, because nothing lives up to what we thought moral truth would be. And that seems to me a very respectable kind of view.
Is metaethics useful?
Gus: And so I’ll be skeptical about metaethics as a field and ask: why should I care?
Why should people in general care about metaethics? Isn’t this some very academic endeavor that doesn’t really affect what we’re trying to do in the world?
Neil: It would be like that if it didn’t affect normative ethics, and a lot of people have pursued metaethics in a way that separates the two domains. They say we’re doing something like philosophy of mathematics: whatever the philosophers of mathematics say, it’s not going to affect the actual mathematicians.
They just go on and do what they’re doing. The philosophers will argue about whether mathematicians are engaging with Plato’s forms or just manipulating symbols, but either way the mathematical truths come out the same. It’s just a question of whether they are truths about Plato’s forms or truths about the symbols.
So some people see it that way. But I don’t think normative ethics comes out the same however the metaethics turns out, because if there are objective moral facts and they’re something like scientific facts, then the right way to investigate them will be something scientific, and a natural thing to expect is that they’d surprise us as much as science surprises us.
We discovered that water isn’t a simple substance. It’s actually made of pieces of two different gases: take a hydrogen molecule, split it in half and keep its two atoms; take an oxygen molecule, split it in half and keep one of its two atoms; and stick them all together.
That’s what water is. Okay, that’s surprising. Nobody thought it would be made of pieces of things we know as gases; that would have astonished people in the 1700s. But so it is. And maybe ethics, if it turns out to be a scientific kind of endeavor, will surprise us the same way, if it can be empirically known and is about concrete, objective stuff.
And that’s how I think ethics in the end turns out. It’s not something that matches our intuitions very neatly and approaches on which it matches our intuitions, you’re either giving the mind amazing powers to figure out moral truths or you’re weakening the moral truth so they can just be shadows of your thoughts and giving up on objectivity and I’m not happy with either of those options.
I don’t want to make the heroic assumptions that allow for us to know these amazing facts. And I don’t think we can settle for real ethics that isn’t objective. A true morality would have to be all of those, knowable in some way that is genuinely knowable and not through some kind of faking and it would have to be genuinely objective and universal.
Gus: Another objection would be that whatever the metaethical truth is, we can all agree on what we want the world to look like in practice. Everyone wants a flourishing society, and we can work towards that as a vague goal without settling these difficult philosophical issues.
Neil: There might be situations in which that is possible, situations in which all the reasonable ethical views point in the same direction. We would be lucky as a political community, I suppose, if things turned out that way. And I don’t think things actually turn out that way.
I think moral disagreement is in fact reasonably widespread in politics and in society today. And look historically, not just at the narrow set of views we have in our current time. The current state of the world imposes some constraints: there are certain kinds of views you couldn’t hold while participating in the modern global economy, views so hostile to outsiders that they just prevent you from engaging with others.
Those views have been held at some level in the past. A lot of societies get to the point where they think it’s okay to commit genocide against others, even heroic, even obligatory. That weighs heavily on me in a way that I think it doesn’t weigh on a lot of people working in metaethics.
The way I see what we’re supposed to do in normative ethics, we can’t just sit on the idea that our intuitions are very widely shared and that lots of people have nice intuitions like ours. Just look back before 1945, not only at the regimes immediately before, but at the entire sweep of human history: it shows you lots of people who think killing others, killing entire other societies, really is the heroic thing to do.
It’s just, it’s a crazy world out there. It’s actually only around that time around world war two, a little bit before that Raphael Lemkin coins the term genocide because there isn’t even a term that carries the kind of weight that we have now for that.
It’s funny, Lemkin is thinking of an alternate word for it first, and at first he tries out vandalism as a word for genocide. What a strange linguistic fact that we switched over to the other word now. And yeah, that’s what it is. It’s just, yeah, the past is just monstrous at some levels. And that kind of monstrosity shows us how far human moral views can diverge how important it is to try to find something that will take us reliably towards the truth, because I don’t trust the intuitions of a species that falls into pro-genocide views, as often as humans do.
Naturalistic realism
Gus: Good point. If we look at the philosophical community, I see two broad camps, we could say, of views. One is realist and non-naturalist, and the other is naturalist and anti-realist. And I am worried about people perceiving this as a dichotomy where you can either be a moral realist or you can be a naturalist.
Is there a way forward for a naturalistic realism?
Neil: Gus, you have spoken to my heart. The project with which I began philosophy as an 18-year-old, when I decided to major in the subject, was: can I find the moral truth, the truth about good and evil, in the natural world, in a way continuous with the sciences broadly speaking? I didn’t even really understand that was what I was trying to do, but that was the aim from the beginning.
A broadly empiricist way of finding objective moral truth is what I was interested in, and I think it can be done. Metaethics today really doesn’t think it can be done, and your description of what the field is like at present is, I think, accurate to the way it has moved over the last 30 years.
To go into why it moved that way, I think the focus on reasons for action as the fundamental thing to look for in metaethics really made that happen, because reasons for action just lend themselves to non-naturalist treatments, the kinds of things you’d see in Jonathan Dancy, Tim Scanlon, and many other philosophers who go that way.
And there’s also anti-realist ways. I feel like Christine Korsgaard did a lot to suggest something that really took the psychology of reflection and deliberation seriously. And it seems that the most natural way to develop that was just to go anti-realist with the psychology of reflection and deliberation being given a certain kind of non-cognitivist interpretation or something like that.
So there were views like that out there too. People also do the deliberation thing in a non-naturalist way, but at any rate, that’s the set of approaches we have. You start out from judgments about reasons, and you give them either a non-natural metaethical treatment, where they’re describing abstract facts that you only know through reason, the Plato’s forms kind of thing.
David Enoch describes himself as a Platonist, and that’s the kind of picture we have there. Michael Huemer has similar views. That whole range of views is available on one side. Or you say: really, we can’t figure out how this is describing the natural world, and we don’t want to go in for non-naturalism if we’re naturalists about the metaphysics, so you go for anti-realism, like you said, which is what the non-cognitivists are doing.
Reasons for action
Gus: So between these two camps, you think there’s a kind of third way?
Neil: Absolutely. And one of the things I want to do to get there is push back against the idea that what we’re doing in metaethics is trying to characterize reasons for action. And I want to go into why that’s really not a good way to go here. I think there’s a big problem with understanding action as fundamental metaethically, because the psychology that engages with action most directly isn’t the psychology of belief.
My work on the Humean theory of motivation was really where this came out very clearly to me. If you look at the production of action, the real things that seem to direct and drive us are our desires, or as Hume called them, our passions. The role of belief in the direction of action is just the means-end belief: this is how I attain the end that I desire or have a passion for.
So the direction seems to be coming from passion or desire, and the role of belief is just: okay, if you want that, this is how you get it. Now, what reasons for action are supposed to do is direct your action. They’re supposed to be the things that pick which action you’re going to do.
And not just: this is how I find the means to achieve my antecedently desired end. No, they’re supposed to set your ends, set the goals of action. Now here’s the problem with this in human psychology: the way humans get the goals of action seems to just be that they have these desires, and the desires drive them.
Michael Smith and some other philosophers have thought the way it actually happens is that you have a belief about reasons for action. You can have desires driving you, but there’s another way to get motivation: a belief about reasons for action can just create a desire, and then the desire will drive you. So Smith presents himself as a Humean theorist about motivation. For that reason he says, I’m doing the desire-belief thing too, but he allows a belief to generate desires by reasoning.
So ultimately it’s belief about reasons that’s driving us and Smith’s view was regarded as a Humean desire-belief view in good standing for a very long time. And still largely is. The thing that happened there though, it made it look like the desire-belief psychology of human action. It made it look like that psychology was compatible with the reasons for action view, because you just get a belief about a reason, and that would create a new desire just automatically through, not automatically, but through quick, simple inference, the way inference usually works.
And if you didn’t do it, you were irrational or something. That’s how Smith sees it: if you think you have a reason to do this but you aren’t motivated, then you have akrasia, weakness of will, and you’re irrational. That was the way Smith thought about it, and the way a lot of people thought about it.
Even those who call themselves Humeans. But looking at actual human motivation, I don’t think that’s the case, and there’s a really good empirical case for this. I’m not going to go to any of the psychology that got into trouble with the replication crisis; a much stronger case is provided by the failure of gay conversion therapy.
As I see it, this was a project by people who really were invested in changing people’s desires, in some cases in line with their own moral beliefs. Often the people sent in for this were unwilling and forced in some way, but there were people doing it because they wanted to be right with God and right with the holy way of doing things, with what was good and what was right and what was virtuous. That’s how they saw what they were doing.
They wanted to get rid of their homosexual desires. They had the moral beliefs, beliefs that there is a reason: all different kinds of moral reasons, practical reasons, reasons dealing with the afterlife, any kind of reason you want to put in there, you can find it there. A reason to have different desires, to do different actions, to be heterosexual, to marry someone of the opposite gender, put them in however you want.
And the attempt to generate new desires, new passions, from these moral beliefs completely failed, even with the experimenters, as it were, being completely motivated to get their result. This was a case where they really wanted that result, or else what they were doing was complete garbage, and it was complete garbage.
The result could not be found. It completely failed. The only people who still believe in this are people within the evangelical network, and that’s not where you want to be, I think, as any kind of naturalistically inclined person. So that’s where I see the failure of moral belief to generate the kinds of practical consequences that were claimed for it.
I don’t think the content of moral belief can be analyzed as fundamentally practical or action-guiding because we’re just not seeing the outputs from the moral attitudes that suggest that. They don’t directly motivate actions and they don’t do the Michael Smith thing of generating new desires through reasoning.
So there is really no case here for the content of moral belief being a distinctively practical, motivational thing in the way that anti-Humeans, and even Michael Smith and the other Humeans who allow that intermediate position, suggest. That whole project of understanding moral judgment as fundamentally practical is now empirically defeated.
Feelings as the basis of morality
Neil: What we need to do is go a different way. Not the way Immanuel Kant went, where he thought all these judgments were about reasons for action. It’s about feeling, about the perceptual, experiential side. That’s really where I think the content of moral judgment is to be found. And when you look at human psychology, you see that the perceptual, experiential side is incredibly fertile; tons of stuff is going on there.
Perception goes on in the human mind so many times per second, perceptual belief formation. Right now, all kinds of beliefs are being formed immediately, in the moment, about Gus’s face and Gus’s thoughts as he looks back at me, and I see myself and I’m like, oh, I’m moving around a lot. You watch a sports game or something like that, where things are moving, and your beliefs are flickering just as fast.
That’s where you find fast activity in belief that is just going on a whole lot, nothing really strained about it. The human mind is just set up to deliver perception to belief and the way it does that is some content comes in perception. You represent the world a certain way, and then you believe that the world is the way you see it.
That’s what you usually do. There are a couple of cases where maybe, you have some reason to doubt that the world is the way it seems, but for the most part, the world seems a way, and you just take that into belief. Once you pay attention to how the world seems. Now, what happens when a feeling like guilt comes in perceptually?
It’s an experience just like my experience of my shirt as blue. If you have an experience of a bluish color on my shirt or a sort of a yellowish color on the map behind me, you form beliefs about the map being yellow, my shirt being blue, as a result. So that is just how beliefs are quickly formed.
And a feeling of guilt can come in about something you did, something you remember. I have this often: I remember something I did 15 years ago, where I said something mean without realizing it was mean. And I’m like, oh, I have that feeling. And in there it just looks to me, so clearly, like I did something wrong.
Wish I hadn’t done that. And you feel, you believe that you said something wrong, back then when that feeling strikes you, because that’s how it looks. That’s how I think moral judgment really works. And now we’re not thinking about the content of moral judgment as okay, this is action-guiding, even though it’s about an action.
Really, the way to understand it is that it’s like color. It’s perceptual. A feeling came in, and I believed that what there was in the world was something that matched my feeling, a wrongness in the action. And that’s what I see wrongness as. It’s not, fundamentally, that there is a universal or categorical reason not to do this. You can build that up at some level, but it doesn’t characterize the fundamental nature of the thing.
It’s that this action has the disgusting color of guilt. Is this basically what it is. So that’s how I see the content of moral judgment. It’s really a perceptual thing to be understood along the lines of color. And if you do it that way now you’re not in the psychology of action. You’re in the psychology of perception and in the psychology of perception, there’s a lot more opportunities to get realism going because all this perceptual content can be evaluated for accuracy which is just one step removed from truth.
When it’s in perception, you call it accuracy. When it’s in belief, you call it truth. Your perception of there being a map behind me. You immediately form the belief. There is a map behind me. The perceptual state could be accurate or inaccurate depending on what it corresponds with, whether it corresponds with reality, the belief can be true or false depending on whether it corresponds with reality.
And I see what we’re trying to do in getting objectivity, not well, truth is what objectivity looks like in belief, but in perception, accuracy can be objectivity because it’s correspondence fundamentally, your mind corresponding to reality is just a really awesome strong kind of objectivity. And let’s go for that.
Let’s try to get these judgements about; I should feel guilty about that, or guilt is the thing to feel about that action I did, or about this beautiful future that could be, hope is the feeling to have. I’m a utilitarian, I like some of these Brave New World-ish futures and other people are like, no, that’s horrible.
They see it with horror. The question to be asked is: is this future something to hope for, or to be horrified by? In these science fiction futures where things are strange, many people are horrified that I hope for that future because there’s more pleasure in it. Asking which is the right feeling to have is, I think, the right way to understand the content of moral judgment, not the reasons-for-action framework that metaethicists of the last 30 years have been digging themselves deeper and deeper into.
I think there is nothing there for finding any sort of important, interesting moral truth; you can go anti-realist in the end, and that’s all you can do. But there is objective and universal truth to be found in the naturalistic realist way, if you look at this as fundamentally perceptual.
Accurate feelings
Gus: So what I believe is that what’s fundamental in these feelings is pain and pleasure, and that more complex feelings such as guilt or horror are colored by pain and pleasure. We can separate feelings into feelings that feel good and feelings that feel bad, and this feeling of goodness is the central, fundamental, objective value within our conscious experience. So do you agree that pain and pleasure are the fundamental values, and that guilt and horror, for example, are secondary or more complex phenomena?
Neil: Absolutely. Pleasure and displeasure, as I like to say, just to get beyond the bodily connotations of pain, but you’re getting it, basically, yes. As far as moral facts go, I am a utilitarian, so I take pleasure and displeasure to be the fundamental value and disvalue that there are. I’m a hedonic utilitarian.
Yes, that’s right. And the interesting thing about our positively and negatively valenced moral judgments is that all of them, as far as I can tell, follow this rule where if they’re positively valenced, if they present an action as right, state of affairs as good, a person as virtuous. In that case they’re pleasant.
The feelings we have that reveal the world to us that way are pleasant feelings. So think about a future state of affairs where people are living in a very strange way but having a whole lot of pleasure; maybe they’re all in the experience machine or something like that.
And they’re all very happy. I think that’s good. And to get into why I think that’s something to hope for. This is where the feeling-view can get correspondence going in a way that is just not even a possibility on the reasons for action view. Suppose I hope for the experience machine future.
The core thing in my feeling of hope that makes it a positively valenced representation is the pleasure in it. I am pleased by that possibility. Now take feelings like horror. If you’re horrified by everybody being in the experience machine, you’re displeased. It couldn’t be horror with just pure pleasure; if there were no displeasure in it, it would be something other than horror.
To be hope, it has to be pleasant, and to be horror, it has to be unpleasant, or at least push your pleasure up or down. That’s what hope and horror do, I think. So that is fundamental to their nature, and without it they aren’t moral feelings. So why should we hope for the experience machine future?
In the experience machine future, by stipulation, there’s a lot more pleasure. And if you hope for it, the hope in your feelings is in objective correspondence with the thing it represents, the experience machine future. There’s pleasure in your feeling; there is pleasure in the future. And I think that match is what makes hoping for the experience machine future objectively accurate. There’s pleasure in both: match. But the person who is horrified by the experience machine future has a negative, displeasure-laden judgment about a pleasure-laden scenario, and that’s a mismatch.
And that’s why that person is out of correspondence with reality, similar for the person who is neutral about the experience machine, future, that person isn’t so horribly mismatched, but there’s still a mismatch.
So yeah, that’s how I get hedonism out of the positive valence, negative valence neutrality framework and get it to correspond, to pleasure, neutrality and displeasure in the world. More pleasure in reality is the thing to hope for, creating more pleasure is something to be proud of, being the kind of person or those kinds of people who are disposed to create more pleasure, those are the people to admire.
And all those feelings, hope, pride, and admiration, are pleasant. They correspond and match. Meanwhile, there’s horror at great suffering in the future, guilt about causing displeasure, hatred of those who would intentionally cause displeasure to others, and contempt towards those who just systematically cause displeasure to others, perhaps unintentionally, by being really dumb and mean or something like that.
I guess if they’re mean, hatred is more apt, but if they’re just careless about it, causing displeasure all over the place, contempt could be the attitude. There’s a match between the pleasure and displeasure in the attitude and the pleasure and displeasure in the world.
And I think that’s what the correspondence between the mind and the world and ethics is fundamentally grounded in.
Objectivity
Gus: What you’re doing, or what I see you as doing, is understanding the complex moral concepts, such as guilt and horror in terms of the basic moral concept, that pleasure is goodness. It’s very interesting to me, this fundamental moral fact that the pleasure is goodness. How could this be an objective fact?
Neil: Good. So it’s objectivity if we define that the way, for example, Sharon Street defines objectivity. She calls it attitude, independence, or stance independence.
So that’s it’s good independently of what anyone thinks or feels about it. I think the goodness of pleasure is that way, it’s good independently of what anyone actually does think or feel about it. So suppose we’re in a world where everybody is horrified by the experience machine future, where everybody is experiencing a lot of pleasure, but they’re disconnected from reality.
Okay. That doesn’t make the experience machine future bad, because in that future, there still is a lot of pleasure. And what are our people doing? They’re having the down judgment towards the up thing. Mismatch. They’re wrong. Their judgements, no matter what people’s judgements are, they don’t change the truth.
So that looks like objectivity, and that’s what proper objectivity is supposed to be. Now, as theorists, there’s something we can do to discover this, and there you might begin in the feelings. But to understand why this doesn’t lose objectivity, let’s just think about how you’d figure out the truth of a belief.
To know whether a belief is true, we have to know what the content of the belief is. If we just say “Gus has a belief” and ask, “Neil, is Gus’s belief true?”, come on, tell me what the belief is; then I can give you a better answer. You’ve got to go to the content first for me to give a good answer. So if we’re asking whether moral judgments are true or false, or whether they can correspond to things in the world, we’ve got to investigate what their content is, or we’re not doing a good job.
So what I’m trying to do with all this psychological background, the gay conversion therapy example, is get clear on what the content of moral judgment is. Is it really the practical thing that Christine Korsgaard, Michael Smith, Tim Scanlon, almost all the big metaethicists of the last 30 years, have told us it is? I think they’re wrong.
They really did not follow the lesson of gay conversion therapy, which was out there for everybody to see. It shows that moral judgment, when you have it, just isn’t practically very powerful. It really doesn’t have any practical oomph of its own; that all comes from a desire, one way or the other.
And so now let’s do what we do when we have things formed by perception, let’s investigate whether the perception was accurate. That would be a nice way to get a handle on this and understand what makes these perceptions accurate because the content of the perception is going to be absorbed by the content of the belief.
That’s how it is with color. You see a blue thing, you think the thing is blue, you see a yellow thing, you think the thing is yellow. You see a thing that is misleadingly colored, maybe a white thing in red light and you form a false belief about it, maybe, we need to investigate it that way and see what’s going on with people’s beliefs, understand the content.
Once we understand the content, this belief is about blue, this one is about yellow, this one is about red, and this one is about what to feel guilty about, we can look for the right things in the world and see if the world matches up. And the reasons-for-action views were having trouble matching up with the world, because where are the reasons for action? We’ve invented something there that I don’t think fits naturally into the world.
When we look for accuracy conditions for feelings, we understand them in terms of correspondence with the perceptual states that led to the belief, namely these feelings, the phenomenology of emotion: guilt, hope, horror, admiration, hatred, contempt, pride, all that. We just try to match the feelings to reality, and we find objective, universal matches.
That’s the ethical truth that’s to be found there. And it’s a hedonistic ethical truth. That’s what the simple stripped down way of looking at the natural world gives you. The scientific worldview, as far as I can tell, gives you a universal and objective morality, and it is hedonic utilitarianism.
Gus: Take the feeling of pain. When we are experiencing pain, how could this be attitude-independent? Isn’t that exactly a reaction to the world that’s as subjective as they come? If I’ve burned my hand, for example, I feel pain, and this is a subjective attitude. How could this be the grounds of an objective morality?
Neil: You’re right. That is a subjective attitude. And at some level, I don’t think that attitude is the grounds of an objective morality. The attitude of pain as such, isn’t really anything I emphasize that heavily. Rather let’s just look at the feelings involved. Just the qualia of perhaps might be the way to talk about this because that’s where I’d see the pleasure and the displeasure.
They’re just experiences, feelings, just like brightness or volume or something like that. There is just dimensions of experience like that or components of experience. That’s the place to find this. Okay. Now let’s look at pain just to get into the answer to your question. I want to go into the ways that pain really is subjective, like you’re saying.
Suppose I touch something hot and feel pain. A different kind of creature that was used to very high heat might not feel any pain at that point. So yes, whether touching that thing is painful is subjective. But utilitarianism wasn’t a theory about the rightness or wrongness or goodness or badness of touching that thing, which is subjectively painful; it’s about the goodness of pleasure and the badness of displeasure. That’s where the objective facts are. Now we need to look into which attitudes represent things as distinctively morally charged, and in which attitudes we apply moral concepts, because we’re looking for the objectivity of morality here, not the objectivity of painfulness.
The objectivity of painfulness I’ve given up on. Is the stove objectively painful? I’m not going to say that. Some alien will touch the stove and feel great pleasure; that’s what’s going to happen there. There’s no objectivity to be found about what is painful, as far as which external things cause pain. Where you can find objectivity is in something like “pain is bad.”
It’s pain as a kind of displeasure. It’s not pain unless there’s some displeasure in it. And at least defined that way, pain is bad. Displeasure is the bad thing. And our judgments about displeasure well those aren’t really pain feelings about it.
Displeasure is bad, I think is not really a content of a tactile judgment about the stove, which is really what I’m having when I’m getting the “stove bad”, “stove painful”, a stove causes pain, all those things. That’s what I’m getting there with stove judgements, but really what I want here is pain judgments. And where am I making those? That’s really to be found in my hope for a certain future where there is less and my horror of a future where there is more.
Universality
Gus: Okay. We have a metaethics that can ground hedonistic utilitarianism. So, what are the best arguments against this view?
Neil: Ah, let’s see. Great. Let me think about this. It’s been a while since I’ve been asked to do this, because I’ve been fighting for the positive view for awhile and I haven’t really brought up.
Gus: Yeah. Yeah.
Neil: I’ll tell you where the most novel stuff is in this view. And that’s a place where I don’t know what the counterarguments are, but there are going to be some, because I’ve done something really new here in setting up this view.
And I invite you to come up with the best counterargument to it because that’s where it’s going to be easiest to attack. This is in how I’m seeing the accuracy conditions of our moral feelings. So what makes hope accurate? What makes horror accurate? In answering those questions, we discover what is good and what is bad.
I think what makes horror accurate is the horrible, the really bad, and what makes hope accurate is the thing to hope for, the good. So that’s what we’re trying to figure out here: how does accuracy work? My defense of hedonic utilitarianism so far depends on a certain kind of claim about the accuracy conditions of hope and horror, and really of the pleasure and displeasure in them, the fundamental things that give them their moral nature.
And what I want to say there is that what makes them accurate is qualitative identity with the thing they represent. It’s just matching the thing they represent, as one golf ball matches another golf ball.
Can a bit of pleasure match another bit of pleasure? If they’re just experiences, if we see them that way, experientially, they can be the same. If it’s just a good feeling matching a good feeling, yes, you can get identity between those things at the level of experience. And maybe you don’t get perfect identity, but you get near matches that are enough for a high degree of accuracy.
So accuracy works that way. It’s a gradable notion: you can be more or less accurate. Perfect accuracy is rare, but enough accuracy is usually enough.
Yeah. And that’s still, if you’re trying to accurately represent what one golf ball looks like, can you give me another, you’ve given me a great representation of how the other one looks. So ,accuracy we’ll find it in all kinds of places in more or less ways. You just don’t want the thing where you’re feeling the “down” thing about the “up” feeling, being horrified by children, enjoying ice cream as Leon Kass was, that guy. That’s a vice right there, if you’re horrified by that.
Those kinds of matches and mismatches are what I’m taking as accuracy. And the idea here is that this kind of identity, the same thing on both sides, is what accuracy is.
Why do I think this? It’s a really novel claim, as far as I can tell. A guy named Colin Marshall has told me that Schopenhauer had an interesting view along these lines, and Adam Smith’s theory of empathy matches these things, but it doesn’t quite do what I’m doing with moral judgment. There are some predecessors, but there’s hardly anything that brings this into ethics. Maybe there’s some Buddhist or Mohist or [unintelligible] philosopher in India, or somebody in Cyrene back in 300 BC or something, who had this, but I’ve never seen it: that this match, qualitative identity, is really what accuracy is.
Why do I have this? Here’s the neat thing about it: it gives you universality and objectivity, and that’s the demand. There has to be something real in the world that would make my judgment accurate, and make the judgment of every metaphysically possible being who feels as I do accurate. All metaphysically possible minds that feel as I do must be accurate here.
If we’re talking about an experience of pleasure and the experience is qualitatively identical, with pleasure in that way. Pleasure to pleasure match. Everyone who feels the pleasure. It’s the same thing as pleasure. And any metaphysically possible mind will be accurate to the pleasure of the world, because it’s just the same thing.
So you can get universality and objectivity out of this, and that’s the argument for identity being what constitutes the match. Identity, as far as I know, is the simplest way to get universal accuracy. You can of course be a non-naturalist and build up a whole bunch of complex relations that apply to all possible minds.
That don’t really deal with the content of the internal sensation. Just say, “hey, there’s a non-natural fact” that, this complicated thing where there is a whole bunch of complex stuff going on. That’s what makes our feelings accurate. And non-naturalism can be reconstructed there. You might get a theory that better matches your intuitions and say, yeah, that’s right for all metaphysically possible minds, you can build that up.
But there’s just nothing empirically to suggest that’s really how it is unless you take our moral intuitions really seriously. And again, I’m the kind of person who thinks human beings are thinking genocide is right, pretty often. I am not taking these intuitions as something to build a giant non-naturalist, metaphysical structure on top of when we are, throughout so much of our history, thinking that killing each other in horrible ways is a great thing to do that you should be proud of, that you ought to do because the heroic guy who leads your people commanded or something like that, or God says. What are people even doing?
A God that commanded that would be an evil God. It’s just a disaster. So I am not taking these intuitions seriously enough to build a giant metaphysics on them. Mathematics, I see why people have a case there. There’s a real case that mathematical intuitions are corresponding to something pretty amazing, because there’s just a lot more agreement.
There’s a guy named Justin Clarke Doane, in metaethics, who denies this, who thinks its a lot more similar, but I don’t think he’s taken a serious empirical look at the frequency of pro-genocide views in human history. Really? You need to do this empirically and look at how often humans are getting it wrong and they are messing it up.
They are arriving at pro-genocide views; it’s disastrous. So don’t build a giant metaphysical picture on that. Do the simple thing. Do accuracy.
How many concepts?
Gus: I agree that we cannot trust our intuitions. And maybe as a critique of your view, let’s say, I’ll give you my own take on the metaethics of hedonistic utilitarianism. I should say that this is not original to me; this is a view developed by Sharon Hewitt Rawlette that I have extended, let’s say. So, in what sense is pain bad? The concept of intrinsic badness, like the concept of intrinsic goodness, is learned by experience. When we feel pain, that is the content of the concept of intrinsic badness.
And the question is then: how can we say that pain is badness? When two concepts point at the same thing, that is the identity relation we’re looking for. And so I see all other moral concepts as secondary. Whether an action is right, whether a person is virtuous, whether an institution is just, all of that must be built up from this very basic fact that pain is badness and pleasure is goodness.
And so maybe we are about as close as we can get to agreeing about these things, but the biggest disagreements sometimes arise between people who are very much in agreement. I would urge you to take an even simpler, even more flat-footed view, and just have this one central fact: that valence is identical to intrinsic value.
Neil: I accept that view, that valence is identical to intrinsic value. Yes. Now the question, to go beyond that, is: what is the content of moral judgment? Because I want to end up where Sharon Hewitt Rawlette ends up. Sharon, if you’re watching: hi, you’re awesome, I like your work, and you have a big role in the paper where I talk about this. I need to send you that paper, Gus, and I need to send her this paper.
We emailed 10 years ago when we were realizing that we had similar views, and it was just great. I need to show her where I’ve ended up, because I’ve seen what she has done, and it’s nice. But anyway, to get to that, this is actually where I make some changes on top of Sharon’s framework, because I think the story about the moral concepts needs to be developed a little further.
It’s not so much that I’m well at a certain level. I don’t think I’m an analytic naturalist of the kind she is. But I have a way within synthetic naturalism to get the “Gus and Sharon”- view to come out. I think. And the way you were talking about it made it seem more synthetic that there were two concepts pointing at the same thing which is the way I like it.
I guess if the two concepts are related enough, it could be analytic that you can analyze one into the other, and then they point at the same thing. But I want to do it in a way that at least leaves open the possibility of a synthetic naturalism, because I think there are Open Question Argument problems.
Problems that arise if you proceed from what Sharon is doing in what seems like the most straightforward way. So I’m trying to build a nice way to get around that and solve those problems for her, really, because it ends up going the way she generally suggests. Now let me ask you, Gus, because you’ve presented it as your own view: what would you say when I’m judging
“this action is wrong”? Do you have a deeper way of analyzing that wrongness judgment? So, I believe this action is wrong. Some people would say wrong means there is an objective or categorical reason not to do the action. I say it’s an action that you should feel displeased about, with a feeling like guilt if it’s your action, or anger if it’s someone else’s.
But anyway, it’s an action to have an unpleasant feeling about. Do you have a story about what wrong is analyzed as?
Gus: Yeah, I would analyze it as: this action will not maximize the balance of pleasure over pain in the rest of the lifetime of the universe. So it is an incredibly difficult thing to know whether an action is wrong or right, but I see that as an enormous empirical investigation. When you say “murder is wrong,” this means that it will cause much more pain than pleasure. And this is an example of how I believe all moral concepts can be built up from the basics of pain’s badness and pleasure’s goodness.
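To make Gus’s criterion concrete, here is one standard way to write down the classical hedonic act-utilitarian idea he describes (the symbols U, p_i, and d_i are introduced here purely for illustration; they aren’t notation from the conversation):

$$U(a) = \int_{t_0}^{\infty} \sum_{i} \big( p_i(t \mid a) - d_i(t \mid a) \big)\, dt$$

where $p_i(t \mid a)$ and $d_i(t \mid a)$ are the pleasure and displeasure of sentient being $i$ at time $t$ given that action $a$ is performed. On this analysis, an action $a$ is wrong just in case some available alternative $a'$ yields $U(a') > U(a)$.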
Neil: Okay, yes. I agree with you on what moral terms refer to in the end: they refer to pleasure and displeasure, and to arrangements of pleasure and displeasure in reality. The action-related ones refer to arrangements of pleasure and displeasure in relation to an action, as when your action causes lots of pain.
So that’s when your action is wrong. So I think we are in agreement as far as I can tell on the reference of these terms. But the question I wanted to ask you was at the level between the reference, the sense or the concept or the meaning in between. For a lot of history, people understood at some level what water meant, but they didn’t know it was was H2O.
This is very useful as an example for a naturalistic moral realist like myself, because if you understand what’s going on that way, you can understand why alchemists couldn’t just consult the dictionary to get out of their errors. They had to do some actual experiments and discover something to do this properly.
So if we make it something analyzable at the level of sense or meaning or concepts, where you can figure out the truth, there arises this question: why didn’t we figure out the moral truth earlier? Why are so many people mistaken, if you can just analyze the concept and figure out the normative ethical truths? This was the problem that G.E. Moore posed to all the utilitarians before him, Bentham and Mill and so on.
And we’ve been doing 120 years of metaethics since then trying to deal with this. My way of dealing with it is that what the concept wrong is to be analyzed as is action, to be displeased about. And then even the person who rejects hedonic utilitarianism, isn’t making a conceptual mistake. They aren’t doing something where they could just look it up in the dictionary or analyze their concept better and find the truth.
They’re doing something well, they’re in a situation like the alchemist. There’s something big that has to happen. And it’s not just going to be reflecting on your concepts. It’s not just going to be consulting the dictionary to find the answer. Now there’s people like Frank Jackson who say really analyzing your concepts is very hard.
So it could be that we have to do some really hard conceptual analysis. And maybe there’s a way that some fan of Jackson can flip my view over cleverly into that. But I don’t see it yet. As I see it, we aren’t going to get there on conceptual analysis alone. And so you can’t put hedonic utilitarianism into the content of the concept.
If you do, you’re going to just give an implausible account of this content of the concepts. It’s going to be, why didn’t people figure this out before, if it’s all there like “a sister is a female sibling” or something like that or, ” a square is a four-sided figure where all the sides are equal in length” that, why couldn’t people figure it out.
And this is the problem that pushed utilitarianism into abandonment for the first half of the 20th century, as far as I can tell. It was still around; people still liked the view, but they went non-cognitivist or did something funny that often went in a different direction.
Then in the second half, people came up with other solutions to these problems and just ran off to other views. Now utilitarianism is basically nowhere, except that a bunch of effective altruists are bringing it back, and that’s on the practical side. I’m trying to give you a theory that will solve Moore’s problems and get us all the way back.
And the way to do that is to understand the wrongness judgment not as something that entails the normative ethics on the conceptual side, but as just: okay, there’s an action to feel displeased about. And that’s compatible with “you should be displeased about lies because they are lies and nothing more.”
A non-naturalist can still hold that on my view; that’s still a conceptual possibility. You know why you should be displeased about lies? Because there’s a non-natural property of wrongness attached to the lie. That’s a conceptually possible view, in the way the alchemist’s view is: there’s no contradiction in it.
The alchemist who thinks there’s just earth, water, fire, and air, and that this stuff is a simple substance, is not falling into contradiction. It’s just an empirical mistake. And that’s how I want to see all the other normative ethicists: just wrong, but not contradictory.
They’re just making that kind of mistake. What you’ve got to do is figure out what to do with that “this is the thing to be displeased about”- judgment. And that requires a little bit of empirical information now.
Gus: This is extremely interesting, and I don’t have a firm judgment about who’s right here. If it’s so simple that it’s just relating two concepts, why didn’t Plato figure this out 2000 years ago? I believe that we are very easily confused by the associations we make between pleasurable experiences and the value we project onto objects or institutions or people.
So for example, if I have a religious experience, I might project some intrinsic goodness onto the religious figures I’m worshipping, whereas it is actually my pleasure that’s good.
So I see this as a series of very easy-to-make mistakes: projecting value out into the world, as opposed to finding it in your experience. But maybe this talk about Moore’s Open Question Argument and analytic versus synthetic naturalism is getting very nerdy and very philosophical, which is great.
Neil: Yeah. Let me say, the way you’re doing it is the way Sharon does it. And I think there’s some possibility that somebody could show me that my way is really a path to your way. If that can be shown, there’s a way to do it where you say: really, in what I’ve told you, there’s a way to build it all the way out, where all the arguments I’ve given for this being hedonism synthetically
turn out to be analytic rather than synthetic; it was just so weird that nobody got all the way down to it. Maybe you could do that, and that’s the way to flip me over into being an analytic naturalist like you and Sharon Hewitt Rawlette.
Reductionism and physicalism
Gus: We should talk about reductionism, because what we both want to do is to reduce ethical value to something that’s discovered empirically.
And yeah, I have this, call it a mistaken youthful ambition, that morality could be a science, and I think you’re agreeing. So do you agree that if we are to fully naturalize ethics, we must be physicalists about our experiences? That we must accept that, in the end, conscious experiences are physical states or brain states?
Neil: I’m neutral right now on the physicalism non-physicalism question. One of the reasons why is, it’s just really hard for me to understand exactly what physical means. If you define it in terms of contemporary physics, I’m pretty doubtful that the stuff of contemporary physics gives you a full reductive treatment of consciousness.
Maybe somebody has a great reduction. I know that there are integrated information theorists and other types of people offering reductions of various kinds, but from what I’ve seen, I would need to know a lot more to weigh in on those, and the smart people who have tried, who have read the stuff, have left me feeling pessimistic about whether I’d find it there.
So my guess right now is that with current science, all the way up and down, we just don’t have the stuff to build the reduction. There is an explanatory gap still remaining. Now, I’m not a pessimist about closing that explanatory gap at some point. David Papineau, a philosopher I matched totally with on one of the PhilPapers surveys, and a physicalist about the mind, has an argument that physicalism does really well: you should expect some kind of physicalism to win in the end. And I’m like, okay, David, I can go that way.
And I’ll just go provisionally with you that way. What I am confident about that is very close to physicalism is that consciousness is within space time. This has been proven. Bertrand Russell proved this in the 1920s.
It’s a consequence of special relativity, basically, according to special relativity, if something is in time, it has to be in space because of the unified nature of space-time. And qualia are in time. Consciousness unfolds in time. You have a conscious experience at one time and then it goes away. And then there’s another one.
Now I have an argument I’m preparing. Writing it up is a bit hard, but I’ve managed to make Russell’s conclusion from special relativity a bit more precise, and I think I can figure out where consciousness is. It has a spatial location; mine is right here.
And yours is in your head. If you’ve seen the Einstein train thought experiments, the way he illustrates special relativity: there’s an argument I’m developing where you basically do the Einstein train thing and have the train run between two people having qualia.
You can see how the timing works out, and you get completely bonkers results, results that special relativity says you won’t get, if consciousness is anywhere but in the head. In some frames of reference, if you’re moving fast enough towards somebody, their conscious experience could happen before their brain state happens, if it’s at a different location. That’s a bizarre thing that special relativity says is not going to happen. Maybe you can make it work on something like Leibniz’s occasionalism or something really bizarre like that.
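As an illustration of the physics being invoked here (a standard textbook formula, not Neil’s own formalism): under a Lorentz transformation, the time separation between two events depends on their spatial separation,

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$

So if a brain state and its conscious experience were simultaneous ($\Delta t = 0$) but at different locations ($\Delta x \neq 0$), then in a frame moving at a suitable speed $v$ we would have $\Delta t' \neq 0$, with the experience occurring before the brain state. Locating the experience where the brain state is ($\Delta x = 0$) keeps their order the same in every frame.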
Even granting something like that, it would just be strange. Consciousness is here within space-time. We should invoke it in our scientific theories without fear. It’s just another thing for which we don’t have a full reduction yet, but there are plenty of those, because reducing things is hard, so let’s just accept that it’s here. Worst case scenario, we have to invoke some new fundamental forces to deal with it. But if the thing is in space-time, in a convenient location near what it causes, it’s not going to cause the disruptions to science that people worry about. Maybe we need one or two new fundamental forces to deal with this.
So it doesn’t disrupt the causal order. It’s just pure data at some level, some data showed up that was unusually ontologically, robust. But you don’t use Ockham’s razor on data. That’s monstrous, don’t cut up the data, don’t simplify the data. Oh, you know what you can do then best thing to do is to cut away all the data. Then you can be so simple in your ontology. You’ll have nothing, but of course we don’t do that. We don’t accept “there is nothing”. That’s what you would do if you Ockham’s razor the data.
Consciousness appears at some level in some of our theories as data, and it will. It’s not the only data, but it is a thing that appears in some theories as data. In my own psychology, it’s basically: I had this experience, why did this happen? That’s a question I could ask. It’s showing up as data there.
Don’t Ockham’s razor it, keep it in. Accept that it’s there and now build it into your theory of the world. Whether it’s reduceable to what we have on the table right now, or not, we don’t know, but it’s in space time, it’s causally structured, like things that play nicely. Just accept that consciousness is there. That qualia are there.
They’re in spacetime, they’re wired up to everything else nicely. They don’t necessarily cause anything, so don’t worry about them messing up other sciences. The other sciences can proceed just fine. You just have all this other stuff that happens to be there. And really, it was only the behaviorists who started raising a giant hue and cry about this stuff.
And I’ve actually looked into the history of this. Among the behaviorists, as far as I can tell, Karl Lashley is doing the devil’s work here. He is actually harming science in a really terrible way. In the early 1920s there’s a debate between the psychologists. There’s this guy Fernberger, another psychologist, who says, “you know what we need to do, we need to split up psychology”.
You behaviorists, you get your thing, your own science of causing behavior. And we’re also going to have the science of consciousness, where we figure out why conscious experiences are happening, and what their physical structure is. And it’s okay if they’re epiphenomenal for that science.
And that’s what keeps peace between the two sciences. You behaviorists figure out what causes behavior, and maybe all this stuff is epiphenomenal and you never have to deal with it again, but we’ll have some consciousness people on the side dealing with that.
Lashley comes in and he says: we are not even allowing a science of consciousness. Nobody has the right to collect data on this. And here’s the deep reason why: “we behaviorists want to be physicalists”. And there could be data there that disrupts physicalism, so don’t collect it. And this is just terrible, because the kind of physicalism I like is the valiant, heroic physicalism that finds all the data and explains it, and takes the risk that maybe we can’t explain it with the stuff we have and maybe we need more stuff. Go out there and be a heroic physicalist and try your best to explain the difficult things.
And maybe it turns out some of them are illusions. Okay, that’ll come up as we look at the data and find out it’s badly collected or something. Go and find stuff and explain it, don’t run away and then say, we’re not going to collect this. You just lose a science of consciousness that you could have had, that Fernberger wanted to have.
And I think Fernberger was right in this debate. There’s a science that’s missing here, a science of consciousness. And we need to go back to the people like Russell and Einstein, who would tell us the qualia have to be in spacetime. And then we can rebuild the science of consciousness, which has been just this missing patch in our set of sciences for a century now because of Lashley’s scientific crime.
Gus: I think that we have to reduce rather than eliminate consciousness if we are to move forward. And I think there could be a lot of interesting discoveries about consciousness. Because it’s so close to what we’re doing, and because it’s so valuable for us, we are missing a lot if we’re not taking consciousness seriously as an object to be investigated.
Neil: Absolutely. And I think hedonic utilitarianism in particular has suffered from this, because the value stuff of hedonic utilitarianism becomes ontologically questionable: it’s within consciousness, and we don’t know what that stuff is.
Nobody’s really sure how to deal with it. The sciences can’t touch it. This stuff exists in an ontological gray zone where respectable people aren’t willing to engage with it. And only unrespectable people are allowed to engage with it. People who are scientific renegades and philosophers who don’t have to always obey scientific rules.
So yeah, that’s just what’s over there, and you just get bad theories then. But if Fernberger had won, I think we would have some great stuff going on right now. Consciousness wouldn’t be an area full of philosophers and people who don’t really like science that much. And there are scientists, people do research on the neural correlates of consciousness, but that is so much smaller than I think it should be, and than it would be if Fernberger, rather than Lashley, had won the debate a hundred years ago. But I don’t really know why Lashley won. It’s a mystery to me. Fernberger totally seems to be right.
And there were people on his side who were pushing for that, but behaviorism won, and the damage Lashley did with Ockham’s razor lasts to the present day.
From philosophy to science
Gus: What I hear glimpses of in what you’re saying is this view of philosophy as, you could call it, a playground before something graduates to become a science.
And so we’re figuring out the basics of a field, figuring out what we’re even investigating. And when something has been investigated in this way, then it can move on and we can make a science out of it. Is that how you see things?
Neil: Very much so, very much. Your playground metaphor is one I’ll have to consider. The metaphor I’ve been thinking of is the mother of the sciences, because that’s where all this came from. Look at what Newton calls his book: Mathematical Principles of Natural Philosophy. Look at what John Dalton calls the book where he comes up with atomic theory, I just love this title: A New System of Chemical Philosophy.
I imagine if it were still called the chemical philosophy department. My dad got his PhD in chemistry; he was a synthetic organic chemist. Chemical philosophy. Can you imagine how many grants we philosophers would get if they were still calling it chemical philosophy? They’d be giving us grants and saying, “make a new chemistry for us, please, can you make another one of those? Because that turned out really well”.
From what I understand, this is something that went wrong in the 1830s. The 1830s, from what I’ve heard, is when science gets something like its modern meaning, one that excludes things like philosophy
and includes something like what we think of as the sciences today. Before that, to be a scientist was to be a natural philosopher. And philosophy was one of the areas within which sciences were growing, the playground, as you have it. Some were playing around there, and I guess the ones that made it off the playground became sciences, and then no one remembers that they were back on the playground before. But you see it in the titles of the books: A New System of Chemical Philosophy, Mathematical Principles of Natural Philosophy.
And there you will find the foundations of two of our most successful scientific disciplines; probably the two most significant books in them are the one with atomic theory and the one Newton wrote. So really we philosophers need to reclaim that, realize that this is what we do, that it contributes in a giant way to the progress of human thought, and then be like, yeah, we’re doing Newton and Dalton’s stuff.
Give us some grants.
Morality as a science
Gus: There is this project that I like of trying to make morality into a science. And this seems to have fallen into disregard in philosophy. It seems to be regarded as naive or simplistic in a way. Or maybe it’s a hope that can never be realized, because we have discovered something about morality that makes it so it can never be a science.
I’ll try to explain what I mean here. When I say morality as a science, I mean that we’re trying to use all the disciplines we know of. We’re trying to use physics, biology, the brain sciences, the social sciences, economics, everything we know to inform this giant project of maximizing the good in the world.
I would like a science of morality to be a unifying justification for what we’re doing in all of the sciences, for what projects we’re researching. It’s a prioritization scheme for what we should investigate. Do you feel that there are great arguments against this, or is it more of a kind of zeitgeist that is currently against morality as a science?
Neil: Yeah. I’ll go with the temporary zeitgeist account of why people are opposed to this. And I think there is one kind of good reason why people are opposed to it, which is that a lot of people have done it badly. A lot of attempts to turn morality into a science just weren’t good. There’ve been a lot of things that didn’t work.
Really, a lot of them did not take metaethics seriously. And I think that solving certain metaethical puzzles is really what you have to do. You don’t have to solve all the puzzles, because some of the puzzles you have to show are misguided in some way, and anything that requires categorical reasons for action or universal reasons for action, I want to say, isn’t a puzzle we should even go in for. Rather, what we’re looking for here is accuracy conditions for feeling: how we should feel, how to feel. That’s the correct way to understand this. I’ll try to do that at the conceptual level first. And then once we’ve gotten into that, I’m like, okay, I can give you a broadly scientific story about how these work once we’re looking for those.
So I think there are two ideas you had there, and I agree with both: one is using all the sciences to figure out morality, and the other is using morality, once we’ve figured it out, to prioritize what we do in the sciences.
On the first one, using all the sciences to figure out morality, that’s really what my treatment of philosophy as the mother of the sciences amounts to. It’s basically philosophy saying, “okay, kids, all of you help the new baby”. So of course this was going to happen. And as you figure out more things, you have more little helpers to help out the new babies.
I wish motherhood took that shape more often; as I understand it, kids more often just try to cause trouble for the new baby. Set that aside. We have very nice kids, very good sciences, and we can trust them to give us good results.
In psychology we’ve had some problems with the replication crisis, that’s true. But I trust Einstein to tell me what the shape of the universe is. He figured that out. And really, we have managed to verify what he told us in some pretty amazing ways. Cell phone GPS runs off of the theory of relativity, because the way that works requires his theory.
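The GPS point can be made quantitative with standard textbook figures (rough numbers, not from the conversation): satellite clocks moving at about 3.9 km/s run slow by roughly 7 microseconds per day from special relativity, while the weaker gravity at their altitude makes them run fast by roughly 45 microseconds per day from general relativity, so

$$-7\,\mu\text{s/day} + 45\,\mu\text{s/day} \approx +38\,\mu\text{s/day}, \qquad c \times 38\,\mu\text{s} \approx 11\ \text{km}.$$

Left uncorrected, that drift would add kilometers of ranging error every day, so the system genuinely depends on both of Einstein’s theories.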
We just have awesome theories and awesome results coming out of these theories. Let’s trust them. Let’s trust those theories to tell us about this universe: we’re in an Einsteinian spacetime. That’s what this is. And one of the things you get when you get that really is, I think, metaethically important.
There’s some powerful anti-intuition stuff that Einstein got from Hume, and that should inspire us in how we do metaethics today. Einstein himself, I think in a letter to Moritz Schlick of the Vienna Circle, some years, maybe 10 years, after discovering relativity in 1905, writes about what he got from Hume.
It was empiricism about the way concepts are structured. The way we get our concepts is from experience. So we get our concept of time from our experience of time, our concept of space from our experience of space. And if that’s what it is, then the concept of time could work in something like the way Einstein says it does, where we get violations of the way Kant says things could work.
You could get such things as time travel; it’s a conceptual possibility in certain systems. Whereas in the Kantian system, time proceeds in one direction and space has to be three-dimensional, with a Euclidean geometry. That structure is supposed to be given, if not conceptually then in pure intuition, so it can be known a priori that space is Euclidean, for example. That can be known within the Kantian system. Within Hume’s system, it’s just hard to show how you could know that a priori. And Einstein is reading Hume and figuring out: okay, space being Euclidean is not a priori in this system.
I’m going to find ways in which space isn’t Euclidean. So that is something that Einstein, as far as we can tell, finds in Hume: concepts of space and time that are not the Kantian ones. What this shows is that you can’t be very confident in Kantian intuitive structures to tell you what important things like space and time, and right and wrong, are. And Kant is going to tell you that morality is fundamentally related to action.
We’re looking for universal laws, and the way the categorical imperative works, it’s supposed to be running on universal maxims that have a fundamentally action-directed structure: could you will these things together? It’s very important to the structure of Kant’s theory that this is about action, and it’s very important to contemporary Kantians like Christine Korsgaard that this comes up in reflection about what to do. Practical reason is where morality’s natural home is.
What I want to do here is a very Einstein-like move, a very Hume-driven move. What I’m looking at is the structure of our moral concepts: how are they actually built? What Einstein had to go on here was some empirical data that drove him, measurements of the speed of light, and those pushed him to seeing time and space in a very different way than Kant did.
The gay conversion therapy example could be my empirical data. It shows us that the practical output is just not showing up. So let’s move to a more representational, perceptual way of looking at our feelings, away from the way Kant wanted us to treat morality, fundamentally in terms of action.
Once we look at this in terms of feeling, once we look at it in the copy-principle empiricist way that Hume understood our concepts, once we build up that way, I’m just doing Humean copy-principle empiricism on feelings. A feeling comes in, and the concept you build has some essential connection to that feeling that came in.
It’s that feeling, and you add a should onto it as well. There’s something normative there; I want to account for that. Okay: feeling and should, that’s how you get a moral concept. And you can strip the feeling down even further, to the pleasure and displeasure that’s in it.
So I’m building up the concept Hume’s way. That’s how Einstein discovered the shape of the universe. Let’s do it again and see what we discover. And maybe there’s Einstein-sized stuff at the end of this. So that’s what I want to do. So now here we are using the sciences, just like you’re saying, Gus, to figure out morality. And once we’ve figured out, okay, this is what the good is,
then we can investigate the sciences that are the most helpful in pursuing the good. Maybe we can build something like: we have the good, now let’s do political science with the good in hand. We know pleasure is the good, okay, let’s assume this confidently in political science and do the all-out utilitarian political science that Bentham would’ve dreamed of.
It’s there for us now.
Human motivation
Gus: Great. In general, you’re very inspired by Hume in your account of human motivation. So could you briefly explain the Humean theory of motivation for us?
Neil: Yeah. It’s something that I was going into a little bit earlier with the way that our moral judgment works, right?
So the idea of the Humean theory of motivation is that desire drives everything we do in terms of choosing the goals of action. It motivates all of our actions. Whenever you act, you have a desire for some outcome, some end, and you have a belief that by taking the action you can produce the outcome or the end. And often there are a bunch of steps between the action and the end,
and you have some belief about what those steps are. By pouring this water into this glass, I can quench my thirst; there are a couple of other steps involved in doing things like this, but the end is quenching my thirst, drinking the water, something like that. And I do some things as means to that end.
Now, what the Humean theory is trying to rule out is something where I have a belief about what’s good or right, and that belief plays the kind of role I assigned to desire in that explanation. The belief about what’s good or right says this outcome is good, or this action is right, and that either drives actions that I believe will produce that outcome, or makes me do that action, if the action itself is right.
So Hume is arguing against views where belief, or as he put it “reason”, can motivate us. And the way I defend the Humean theory, I don’t just think it’s about “okay, we can’t have reason immediately driving action”. I think you also need to add this, and this is where I disagree with Michael Smith, who calls himself a Humean:
I think belief cannot generate a desire. Through reasoning from beliefs alone, you can’t have a bunch of beliefs, reason from them, and end up with a desire. You can imagine creatures that can do that, but I don’t think the psychology Michael Smith suggests, where beliefs about our reasons to do something, or moral beliefs,
generate desires, is possible as a psychology for us. You can imagine creatures like that; they could be really psychologically powerful creatures. They would think, ah, I ought to work harder, I have a reason to work harder, and they’d form a desire to work harder, and they would get more work done than me.
But really we aren’t like that. And the gay conversion therapy case is really what I think shows it.
Gus: Imagine I go to a philosophy seminar with Peter Singer and he convinces me through reason that factory farming is morally abhorrent. Isn’t that a case in which, if I then stop eating meat, I am directly motivated by my recent beliefs as opposed to my desires?
Neil: We have to get into your head as you’ve listened to Peter Singer and figure out what exactly happened there, and really this needs to be treated with a great deal of psychological depth. We need to really explore how this is going on, how moral persuasion happens.
Now, the cases of moral persuasion that I was seeing anti-Humeans offer in the philosophical literature: when I looked at these, they were leaving details in to make them true to life, and when I looked closely at those details, it’s like, why is that detail there, if it’s all belief? So I’ll give you an example. There is a case that Stephen Darwall has where a woman watches a film about workers being treated badly in a cotton mill or something in the Southern United States.
And then that experience gets her to become an activist trying to work for better conditions for the workers. Okay. If you take that process, so she sees the film, becomes an activist and does that, it seems to me like your Peter Singer case. She gets information and acts on the basis of information. Sounds pretty reason-based.
But one of the details that Darwall leaves in is that this woman, Roberta, feels an experience of shock and horror as she sees how the workers are being treated. And now there’s this question: why does she have that emotional response? Why are shock and horror showing up at a time when, we would think, she hasn’t yet formed the decision to act?
Now, you could put the decision oddly early and say, oh, she had really decided early. But I think a natural way to see the case, and the way Darwall presents it, is that people are just shocked and horrified by this, and they feel that first.
And then they decide what to do. And even if they’re shocked and horrified and then decide something must be done, you can be shocked even before you decide something must be done, even before you draw any practical inferences relating to action. You can just watch it, almost the way you’d watch a fiction film where something really bad happens to somebody and you feel bad for that person, but there’s nothing you can do, because obviously it’s a fiction.
You can watch the documentary that way, feel that, and then later on think: can I do anything for those people? Oh, there might be something. That’s the way Darwall presents this, and now we’re trying to explain those feelings. A thing about belief is that belief on its own does not generate horror.
As far as I can tell, to be horrified, you need something like a desire first, a desire for the thing not to happen. So when Peter Singer tells you about what’s bad in factory farms, you have an unpleasant experience as you think about what’s happening to the animals, which I think is how it usually is for people.
And that’s why our EA veggie and vegan friends very often try to give you cuddly animal pictures, especially when they’re talking to ordinary people who are not philosophers. The way it ordinarily works is you give people that. Now, maybe if people are primed up in the very complex way that philosophers often are, or if they’re the kinds of unusual people who become philosophers or who become EAs, maybe it’s different.
I know that within our EA community, we have some people who are just a little bit different from the rest of the folks, and maybe they have something special going on where you have to talk to them in a special way. I don’t know. But with ordinary people, you give them cuddly animal pictures.
To get them to donate to global poverty causes, you give them the kids in Africa and India, and you make them cute. As far as I can tell, this is not a belief-driven process. This is looking a lot more like processes that manipulate and enlist the help of desires. So you had some desires first, and what usually goes on in this persuasion is they’re reaching into your desires and grabbing something.
Now, I don’t think this has to undermine the rationality of the persuasion. It may be that the images are actually undoing something irrational you were stuck in beforehand. It was just that what was much more vivid to you was the luxury goods you could buy, and these images of the animals or the kids raise the vividness of the other things, so that now you can make a decision from equal vividness, which is better.
So this is not to say that something irrational is going on there. It psychologically might be engaging with things that are on the motivation/desire side in a way that’s much more Humean than a way where I get convinced first and the beliefs drive my action from then on.
There’s a guy named Josh May, who I think is the best anti-Humean out there. He has a book called Regard for Reason in the Moral Mind. I am not happy with a lot of anti-Humeans, who make a slipshod attempt at best to engage with the empirical work, but he really tries. And one thing that I think we’ve come to a sort of agreement on, a quasi-agreement, as far as I can tell, obviously he has his position and I have mine, is that rational persuasion is much more frequent than, say, Jonathan Haidt or Joshua Greene or people like that were suggesting. Yeah, we can be rationally persuaded of moral things, and it happens. This really does happen.
However, that happens, I want to say, with an underlying strongly Humean psychology where desire drives all action. And when we act morally, that’s going to be at some deep level desire-driven too. But that’s okay; that doesn’t undermine the rationality of things in general. Just understand rationality more the way Hume would. We’re going to be good, rational Humeans about this and form our judgments the way Einstein did: simplest explanation of the data.
That’s the way you do it. Take in the experience, build the simplest explanation. Einstein says that’s the supreme goal of all theory. And that is the rational process. That scientific rational process is the one that leads us to the moral truth.
Alienation from our values
Gus: There’s this problem of feeling alienated from our own values, especially among utilitarians or effective altruists, and especially if you have a system that places high demands on you. Our values can feel distant. We can feel like we’re being almost oppressed by what we have to do, what we should do. Do you have any advice for dealing with this?
Neil: It may be too late for many who feel that conflict, because that problem, I do think, just is something that happens, something that comes up from within your desires at a certain level.
Now, suppose you were an utterly pure of heart utilitarian, you believe the theory and all your desires and emotions are in line with the theory. I don’t think we have many people like that. Psychologically, it’s hard for a human being to be that way. I think we can make ourselves slowly more and more that way.
And it’s really good for some people to do that. Maybe not everybody; who knows if everybody should. But I want some ninjas. I want some utterly single-minded, pure utilitarians who will just go around the world and end all the existential risks. We have some of those people, and they will save us from monsters.
So yeah, you can try to build yourself up that way. It’s hard, but people can try to do it over time. People who come to the theory young have an easier time of this because they can over time set themselves up psychologically and take courses in life where disruptions don’t happen.
I’ve been lucky that way. Coming into philosophy as a utilitarian, this is what I did. I was an undergraduate, 18 years old, a freshman at Harvard. And it occurred to me that this phenomenal-introspection way of figuring out that pleasure is good, that, yeah, that has to be the way you do it.
And it was just like the way a young chess player will be: “I have checkmate here. I know there’s mate in maybe seven or something like that, but I have it”. And you just throw all the pieces at it and everyone thinks that’s crazy. Why are you throwing all your pieces away? Now I’m 41 and I’ve thrown a lot of pieces at this.
And I’m like, now it’s not mate in seven. It’s mate in three. And I think I’ve got this. It’s just coming closer and closer, that’s how this goes. And then if you do it that way, you have time to build up your life in such a way that you don’t get all the contrary things that would pull you away from the theory.
If you have children, you’re going to have this kind of conflict between caring for one or a few, your children. And then this theory that you accept that tells you to do something else. And that conflict is just really hard to resolve in your own life because you love your children.
Utilitarians who have kids really do have to figure out how they’re going to do this, and I wish them the best. My situation is easier: I don’t have kids. I can be pretty single-minded, pretty focused. And I’ve had plenty of time to try to build myself up as a person in a way that made me even more single-minded and focused. You can do that. At bottom, if your motivations are separate, you will be alienated. You’ll have a lot of motivations, and you’ll think about one motivation of yours that goes in a different direction. You’ll have conflict. And that conflict, I think, just is a kind of alienation, or an alienation emerges from it, and I can’t tell you how to get away from it if you feel it.
Christine Korsgaard in The Sources of Normativity has this idea that rational deliberation could unify you. If your desires are not unified, there’s a way you could rationally say, I have settled on this, and then everything falls back together. And I don’t think that actually works.
I don’t think that human beings can do that. They can do it in some situations where they find a solution and bring the things together in some real resolution between them. They find a way to pursue everything just about enough, and they’re happy. But there are times when there’s nothing rational deliberation can do, and you’re going to lose a part of yourself, and it’s always going to be screaming at you for not following it. If you have children, say, and it’s a choice between your children and the utilitarian path, and you really believe utilitarianism is the right moral theory,
then the more severe that becomes, the worse it can be, and I don’t think there’s necessarily a way out for people if that really becomes a dilemma.
Children as a utilitarian
Gus: This is a tricky issue, the issue of having children as a utilitarian. I would worry about what I consider some of the most ethical people on earth choosing not to have children, and then these values not being furthered. I am skeptical about the case against having children, I must say.
Neil: We should talk about that, because I have the “it’s just fine for utilitarians to not procreate” kind of view. I think our most effective way to make people better is not through the incredibly expensive and difficult process of raising them ourselves. The grow-your-own approach just doesn’t scale, and we need things that scale. The better way to do it is the way that Peter Singer actually made a lot of utilitarians.
He didn’t do it by impregnating somebody who gave birth to them. He wrote a bunch of books, and I’m doing that. I think we can do it that way successfully. We just go out and make good arguments. And really, if the problem were that there were already a lot of great arguments out there, the theory had been decisively argued for, and we just needed to breed some people who would believe it, then it would be a different situation.
A lot of the things we have to do, we just have to do. And maybe once those are done, there’ll be time for this, but we have so many things to do first. And I really think that the grow-your-own approach is inefficient. It doesn’t scale. It’s not going to be a good way to throw our resources at things.
A better way would be, I think, we’d do a lot better if we go to the communities that we’re helping with our global poverty aid, and just be like, “hey by the way, this is us and our theory that’s been distributing the de-worming pills, the bed nets here”. If you’re interested, we can build some education on top of this for you and pipe you into our system as future utilitarians, who’ve been helped.
I really look forward to those people. I want to see the future utilitarians who came up from “yeah, I’m here because of the bed nets. Hello”.
Gus: It’s an absolutely beautiful thought to think of a future effective altruist that was once saved by the efforts of effective altruism.
Neil: They are coming. I promise you, they are on the way. I’m in that quasi-position myself, because goodness knows if I’d be here if not for a piece of American foreign aid. My parents come from a tiny village in rural India. And there was some grain assistance at a certain point, I think during the late sixties or early seventies; there were various crises in that part of India.
Some of this is Peter Singer’s stuff, because we’re from West Bengal, which is close to Bangladesh. So I speak the same language as the Bangladeshis that Peter Singer was telling everyone to help. You’ve got the same blood here, and maybe someone’s aid helped, though it wasn’t specifically utilitarians.
But my dad was telling me sacks of grain at one point came in with the American flag on them. And he learned to love America partly from that: “hey, these are the people who feed you when you’re hungry”. That’s an awesome way to buy goodwill internationally. For a sack of grain, you get my dad on your side, and he comes to the US and becomes a pharmaceutical chemist. Wonderful.
EAs can do this. And I think that’s going to be so sweet when people like my dad start showing up.
Gus: With regard to the point about children, I agree that it’s more effective to convince people than to create people. But I would worry about the marketing of effective altruism or utilitarianism if it were widely known that to join this community, you must abstain from having children. I think this eliminates a broad chunk of the population. Given the choice between accepting these values and having children, the values will be rejected, because if anything is an innate drive, it’s the desire to have children.
Neil: Right, yeah, I see that. And once the movement gets big enough, obviously this couldn’t be a requirement. I would never want it to be a requirement on people in the movement or something like that; it just loses people pointlessly. What I do think is, when I talked about ninjas, not everybody who’s doing the good has to be a ninja, but some people think it would be awesome to become a ninja.
The whole idea is that you need special people to do these jobs. And right now, among the jobs that utilitarians need done, there are some select jobs to prevent existential risk. You need to deal with biotechnology policy so that we don’t get pandemics even worse than we have now, ones that actually could kill the entire human race. Which you could get if someone gets their own design-your-own-virus kit and decides to design the nastiest virus they ever could: let’s take COVID and build some more nasty stuff into the RNA, so it’ll totally smash anybody it goes into. So make sure that nobody does that.
So we have people, I have met some people who are trying to do this, get into the right places in governments to just shut down the “design your own humanity-destroying virus”. Let’s just get that kit to never happen. And there are people working on the project of making sure it never happens. So those are the people.
Once you build up your hero teams, there’s going to be a supporting infrastructure in any proper society that can support this, of ordinary people going about their ordinary lives, and occasionally meeting one of your heroes and being like, “oh, thanks, hero, for helping out”. And the hero’s like, “hey, glad to help”, and just rushes off, spending his or her or their entire life doing this kind of task.
That’s in the end where we want to go, but we just need some people right now. There’ll be room for all types once we build this up. I don’t think in the end this is something that just becomes, nobody has children. It’s just while we’re trying to, while we have a small number of people to work with, we do it this way.
Hey, and if you want to contribute at a different level, there are some people, I think of Jeff Kaufman and Julia Wise, who are EAs doing a good job of just showing you, and they, I think, have their own sort of giant useful mission: showing how to live the EA life from the parent perspective, showing that it can be done.
Motivation and community
Gus: The question of integrating our moral philosophy into our lives is maybe the most important, because otherwise what we’re doing when we’re discussing metaethics is just, you could call it, a form of live action role playing, if we’re not actually implementing it in our actions.
So how do you think about the importance of community here? This seems to me to be one of the really important motivators for people: to be part of a community with some values in common.
Neil: Right, so I don’t have a general answer to the question of community because, how to develop that community, obviously it depends on what your initial situation is, where you need to go from there. And in some places the EA community functions as a community. And when I go to the Bay Area, when I see some of my friends in the Oxford area, it’s almost like a small religious community in some ways.
And those do succeed sometimes, and they just have to figure out how to do it and do their thing. Yeah, so there’s that and you can build that. You can do that, and we’re going to need communities of other kinds too. We’re going to need EAs integrated into larger communities of non-EAs because very often that’s how it has to be.
And so we’ll just get all kinds of solutions to these problems as people think about their specific situations. But on the community and the “is metaethics just LARPing” kinds of issues: look, the way that a lot of people do metaethics, it might just be. I can’t answer for the field in general. But the way I came to it, it came up from something that was very embedded in the world.
The reason that I got so worried about these questions, about whether we can scientifically prove that something is right or wrong, good or evil, was that I came up against one somewhat genocide-ish guy in my upbringing: Senator Jesse Helms of North Carolina, who was the senator of the state I lived in from age eight all the way to the end of college, when I’d come back from college to where my parents live in North Carolina. Here’s the genocide-like thing that he accepted: he was pretty sanguine about the AIDS crisis, because it killed gay people. That was something Jesse Helms really did not mind; he saw the end of them as something to be wanted.
And that view, if it’s not full-on genocide, is in that direction. There’s an entire population that he just wants to see die. And it’s not that I think he’d go to actually acting on this, but “hey, if God’s taking care of it, why interfere with the hand of the Lord” was the attitude I was seeing from him.
And yeah, this stuff is out there. He was pushing against funding to deal with the disease for a very long time, throughout the nineties even. At some point he flipped a little bit and would allow it, but that was way long into the epidemic; people had died in large numbers while he was blocking funding.
We had that. He was just racially terrible. He was a segregationist from the Martin Luther King days and had disliked Martin Luther King’s attempts to establish integration in the South. So racial hatred was just deep in this guy. And I saw him twice defeat a black man who had been the mayor of the biggest city in the state, a highly respected former architect, in Senate races in 1990 and 1996.
I was 10 and 16 years old at those times. And that was my introduction to the fact that, yeah, evil was afoot here, and people love it. People think it’s right. People think it’s the objective moral truth. And we see them now in the South again, with certain kinds of views that could lead America to cease being a democracy, if they succeed at what they’re doing.
Gus: So this might be a good point in time to remind ourselves of the dangers of having a community with shared values, in case those values are wrong. A community can be deeply motivating for people whether or not its values are right. So if your religious community teaches you about the dangers of homosexuality, then you might be extremely motivated to do something about it.
And this is of course something that effective altruists have to look out for too. Avoiding dogmatism, I would say, should be a main focus, along with remaining open to criticism.
Neil: I do think that dogmatists are more likely to be loud about their dogmatism than non-dogmatists are to be loud about their non-dogmatism. So when you look at the discourse, you’ll naturally overestimate the amount of dogmatism. I’m pretty happy with the community.
I think there’s enough going on that we’ll have a bunch of people who can push back, and the community is diverse and all over the place. There’s the Bay Area people, the Oxford people, a bunch of scattered people. We have chapters in Singapore and Hong Kong that I’ve seen, and there’s a bunch of different perspectives that come from their different local environments.
So I feel like the community is becoming a network of different opinions. That will prevent all the bad “we’re in this community and we have one view” stuff from going on.
Giving What We Can
Gus: You’re a member of Giving What We Can. What is it, how did you make the choice to become a member, and how has your experience of being a member been?
Neil: Yeah. It was about 10 years ago. I was already a utilitarian, and this philosopher named Rachel Brown in Australia, I think she’s teaching at Australian National University now, told me, “there are these people, these effective altruists, who are like you, and they’ve started a thing, and maybe you should join them”.
And I looked at it, and I was like, that’s the kind of thing I’m going to be part of. So I took the pledge and got in, and it was just like, “hey, I’m a utilitarian, and these people are being good utilitarians, just let me join them”. And, oh, you’re going to find me some charities to donate to? Okay, I have money now, because I had just gotten the job in Singapore in 2008.
And I had thought to myself, I want to keep giving away a quarter of my salary every year. And I’ve managed to pretty much stick to that, between charitable and US political causes. Basically, I try to hit the pledge of over 10% on global poverty stuff, global poverty and animals, and the rest of the quarter is usually US politics. I wouldn’t say I’ve always hit 25%, but 20 to 25% is a pretty good estimate for me, usually.
Gus: And this hasn’t been disruptive in your life? Has it been easy for you to stick to this?
Neil: Really easy. I didn’t really get on the hedonic treadmill very much. When I moved up from being a grad student to being an assistant professor, I only made small moves. That’s my advice to anybody who wants to be happy with just a little: do it that way, and it’ll work out.
And if you’ve got a successful career going, just live beneath your means at every stage. I’ve always lived like one career rank below my means, so here I am, an associate professor who’s bought some nice things and lives like an assistant professor.
I’m feeling all right. Or maybe I’m even living like a postdoc; I may even be two levels down. So yeah, just keep doing that and you can be happy. And not having kids, that’s huge too. I had thought about it; it’s not like I’m personally opposed to the idea.
But I realized that to do that, either I or a woman I was in a relationship with would have to make huge career sacrifices, probably both of us, and I just didn’t see how that was going to end up being something I could count on happening in a good way. So really the thought for myself was: if somehow that falls into place, go for it, but don’t expect it.
And just go at philosophy because you’re doing well with that. And the relationship stuff I can’t be confident about. Now things are going well. And I have a little cat meowing here because a nice lady has brought me a cat and she’s here too. But we’ll see what happens anyway.
That’s how I got to where I am now. And it’s worked out easy for me because I was well set for it to be easy. Ever since I was 18, I’ve been on one track: let me do philosophy, prove utilitarianism, all that kind of stuff. I was just on my own mission.
I was running out there with my Katana blade, hiding when I had to hide and slaying things when I had to slay things.
Political action for utilitarians
Gus: Let’s talk about how we should do politics if we are generally utilitarian or at least if we are effective altruists.
So one point to note before we dig into the system: EA, the effective altruist movement, has stayed relatively apolitical. It has tried to stay out of the murkiest and most controversial matters. And I think this has been a good thing, because we want this movement to be inclusive of people with different political views.
We want it to be serious in a way that’s not in the mud of the political issue of the day. And so there’s that, and that’s one thing. The other thing is just the importance of politics for all of the things that effective altruists want to achieve. And so how do we think about the value of staying somewhat apolitical?
Neil: So there is a lot of value in it that I see as strategic value because once you are political, you have enemies and being apolitical avoids enemies. So there’s definitely a lot to be gained by that. It does keep the movement open to lots of different people and that’s good because we just want more people doing the “donate money to global poverty preventing causes” type of thing.
And if we get people of all different views, if we get libertarians doing it because they’re like, “this kind of charity is awesome”, and they have a special drive to show the power of private donation as opposed to government activity, let them have their fun in the best possible way.
I don’t think their ideas about how to set up society are any good. They can’t build a sewer system, because that requires government over the top in a way that I have never seen a libertarian theory deal with properly.
But hey look, we’re not talking with them about that right now. We’re talking about with them about global poverty, and yeah. Okay, go ahead. Send the money and I’ll send mine too and we’ll shake hands.
There are places where I think, as far as actual goals, EA does want some things to happen. And the number one thing that I think the effective altruist movement would like, that would just be good for EA goals, is something that can globally prevent dangerous technology from killing us all. The “what would kill us all” threats mostly involve technology in some way; asteroids could knock us out, actually, but for the most part, a whole bunch of the big ones are near-term: we develop some technology and it smashes us. AI is an example. Pandemics: normally, humanity has been through a lot of those, but if someone really manages to up the viruses’ game, if they get stronger with distinctly intentional human help, because somebody wants to kill everybody, that’s dangerous.
I especially worry, and this is one of the things that has not so much come up, there’s a version of AI risk here, about military AIs. They’re the scariest AIs to me, because they’re probably not going to have the right safety things attached to them, because they need to be built in secret.
Or often they will be; it’s a secret project to build military technology, and you don’t want anyone else copying that. So there might be too few eyes looking at this, and they might be unsafe. They might have much closer and easier access to weapons systems. And some North Korea-like country might decide: okay, our ticket to power is building a military AI.
And yeah, some really bad stuff could happen.
Gus: Yeah, we should mention briefly why we should expect these risks of extinction to be human-caused rather than something that arises from nature, like volcanic activity or asteroids. In general, the reason is that we can look at how long humanity has survived so far.
And then we can say that if we’ve survived, let’s say, 200,000 years, the probability of natural extinction per year must be pretty low. The stuff that’s about to change is the human-generated risks, such as you mentioned: engineered pandemics, AI risk, maybe a great power war, a nuclear war. These things.
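The back-of-envelope version of this argument (a standard sketch, assuming a constant annual probability p of natural extinction): surviving T years has probability

$$(1-p)^{T} \approx e^{-pT},$$

so with T = 200,000, our survival is only plausible if p is on the order of 1/200,000 per year (about 0.0005%) or less. If p were as high as 1/20,000, the chance of making it this far would already be around e^{-10}, roughly 0.005%.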
This is what you suggest as the kind of common political project for effective altruists, whatever their kind of politics in the everyday political domain is.
Neil: Yes. Yeah. That is the project that I think the movement has rightly seized on and people have their own assessments of which technology is the most dangerous one, but something along those lines is the big issue.
I would add anything connected to a military to the list of things, because nuclear weapons were the classic one, and those are still out there, still all over the place. So I worry about those, because if an AI managed to hack its way into them or something like that, that just gives it an easy way to cause destruction. If you just have these things around, it’s a source of trouble.
World government
Neil: Yeah, there’s just a lot of technological danger. What do you do about technological danger? For mitigating technological risks that could kill everybody, you need everybody who is in control of a dangerous technology to be sensitive to the consequences of the risk. And the kind of actor who does kill everybody, I imagine, is a North Korea-type actor who really feels like: there is some risk of us dying too, but overall, in the grand strategic calculus, we are the people who will take a one-in-a-hundred risk that absolutely everybody dies.
One in a hundred, everybody dies; 49% chance the project goes nowhere; 50% chance we get a significant strategic advantage. I think North Korea goes for it then. And if you play that game a hundred times, the chances are pretty good for the end of the world.
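The arithmetic behind that (a quick check, assuming each play carries an independent 1% chance of catastrophe):

$$P(\text{catastrophe}) = 1 - 0.99^{100} \approx 63\%,$$

and even 45 plays already gives $1 - 0.99^{45} \approx 36\%$.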
Yeah, you probably don’t end up playing that game a hundred times; maybe by time number 45, there’s no more play. If a lot of people are playing that game, it’s over. It could also be corporations trying to do something to maximize their profits that has a giant possible cost.
But hey, look, if there’s that kind of riches on the line, you’ll take a one-in-a-thousand chance of dying, yeah. So I don’t want people taking risks on behalf of all of humanity that they personally profit from. And we have a way to deal with collective action problems like that; this is just what regulation and government are there for.
If people are basically in collective action problems, you regulate your way out. This is how you build a sewer system. We’re in a collective action problem because we’re all creating waste that is infectious. We all have to put our money together to build the pipes, and we deal with that, and now cities are livable.
I actually found out that before about 1900, cities were net population losers, because people would die from all the disease and all the filth and all the waste there. Jared Diamond has this throwaway observation about that in Guns, Germs, and Steel. And then you build sanitation, and now cities don’t need to be replenished from the outside.
So that’s just an example; I tell my PPE students this: solve collective action problems with government. Now we have a collective action problem in keeping humanity alive. So how do we solve that? Get a universal regulator over the top of everything to make sure that nobody is making these gambles.
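As a minimal sketch of the collective action structure being described, with all numbers hypothetical and chosen only to make the incentive gap visible:

```python
# One actor's gamble pays them privately but imposes a small chance of
# catastrophe on everyone. All parameters below are illustrative assumptions.
PRIVATE_GAIN = 1.0       # payoff to an actor who takes the gamble
CATASTROPHE_RISK = 0.01  # per-gamble chance of triggering the catastrophe
CATASTROPHE_COST = 50.0  # cost to *every* actor if catastrophe occurs
N_ACTORS = 100

def expected_payoff(num_gamblers: int, i_gamble: bool) -> float:
    """Expected payoff for one actor, given how many actors gamble in total."""
    p_catastrophe = 1 - (1 - CATASTROPHE_RISK) ** num_gamblers
    gain = PRIVATE_GAIN if i_gamble else 0.0
    return gain - p_catastrophe * CATASTROPHE_COST

# Individually rational: with the other 99 already gambling, joining adds
# +1.0 of private gain against only ~0.19 of extra expected loss.
print(expected_payoff(N_ACTORS, True) - expected_payoff(N_ACTORS - 1, False))  # ~ +0.81

# Collectively disastrous: with all 100 gambling, catastrophe odds are
# 1 - 0.99**100 (about 63%), and everyone's expected payoff is deeply negative.
print(expected_payoff(N_ACTORS, True))  # ~ -30.7

# What a regulator who bans the gamble secures for everyone instead.
print(expected_payoff(0, False))  # 0.0
```

Each actor does better by gambling whatever the others do, yet everyone prefers the regulated outcome to the all-gamble outcome, which is exactly the structure the sewer example and the universal regulator are answering.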
One thing people worry about here is: will the universal regulator be authoritarian? That’s a major worry that a lot of people have. There’s so much fiction about evil people trying to control the world. And we did have empires that were trying to be the authoritarian world government
a couple of times in the past century: Nazi Germany, Imperial Japan, the USSR. There’s a bunch of entities that were going for that. So yeah, those bad guys are out there. But one funny thing about those bad guys; here’s why I just don’t take that threat very seriously at all.
I don’t think authoritarian world government is a serious threat, because of the nuclear era. Try to imagine where the capital of your authoritarian world government is going to be. Who’s going to be in charge of this thing? China wants it; Russia wants theirs to be the capital.
Various Western powers might want it. But how do they get everyone else to agree, when everyone else has their own giant supply of world-ending nukes? It’s really hard for anyone in a nuclear world to take over, unless they’re going to do some crazy gambles, North Korea style. Okay, we’ll keep those gamblers out of the game, we need to do that, but really nobody else can take over.
And we found this in the cold war: the US and the USSR could stare each other down for decades, and nobody’s going to go at it with these nukes, and nobody’s even going to conventionally invade the other. They do proxy wars. Proxy wars are nasty, and they did some nasty proxy wars that tore up Africa, Latin America, Southeast Asia.
You had to be lucky and smart to stay out of the proxy wars. But really, they don’t invade each other’s home territory, and they know that if they do, there’s going to be hell to pay. Nobody wants to do that and possibly trigger the end of the world with all these nukes around. That dynamic, I think, prevents anyone from becoming an authoritarian ruler who really imposes their will on others, because all the others have nukes at this point, or enough of them do, so you just cannot get unified authoritarian control of the world.
And look at what the authoritarians want. They’re heavily nationalistic in general: it’s Putin, it’s Xi Jinping. A lot of them want their own power center. And if they’re ethnic nationalist authoritarians like Xi Jinping and Vladimir Putin, they can’t play well together.
“Russian nationalism rules the world” and “Chinese nationalism rules the world” are two separate end games that you have to play out against each other, and so they can have a cold war. That’s probably what happens between them.
The Axis powers probably would have had a really grim cold war against each other in the end. Because why, really, do Hitler and Hirohito want to divide the world between themselves? They can make up fake racial theories that say “we are all one blood, Germans and Japanese”. They have to do it for an alliance, and nonsense like that goes on when you have to do it for an alliance, but then the alliance goes away, because really you want Munich or Tokyo, one or the other, not both.
Gus: Some, at least some Japanese people were honorary Aryans in Hitler’s race pseudoscience.
Neil: Yeah. They had to build a pseudoscience with honorary Aryan-ship.
But once you get a unified, controlling Nazi Germany and a unified, controlling Imperial Japan, if World War Two had played out that way, I don’t know if being an honorary Aryan is going to last very long.
There’s going to be some conflict eventually, and it’s not going to hold together.
Getting to world government
Gus: Nations can hold themselves in this kind of game-theoretically stable situation, in which there’s mutually assured destruction with nuclear weapons, if these governments are at least somewhat rational and not like a North Korea. So the world you are imagining: is it a world in which we get to a world government, or is it a world in which there is a level above nation states that regulates the interactions between the nation states? Because if it’s the world government, then I think you lose the advantage of nuclear weapons keeping everyone in check.
Neil: Good. Good. So let me show you how you get to the world government.
So suppose you have your multiple-empire scenario. And this is what I think plays out over this century. There’s going to be a Chinese empire ruled from Beijing, there’s going to be a Russian empire ruled from Moscow, and there’s going to be a bunch of liberal democratic entities that play reasonably well with each other.
Which ones they are, things might flow in and out of that. We’ll see whether America can stay a liberal democracy, or if certain kinds of people take over there and decide: we want to be strong ethnic nationalists, really America is the land of the white people and the white blood, one unified blood. Something like that. If they do that, they can set up their own ethnic nationalist empire over there. Maybe in some way this becomes a bunch of racial empires, with the Chinese one versus a Russo-American white one, or I don’t know what kind of nonsense ends up being here.
Russo-American maybe holds together better than Nazi Germany and Imperial Japan did. But let’s take a slightly better scenario: some liberal democracy that includes America holds together. That’s what I’m really playing for in US politics.
And I think this is a point some EAs get. Not the movement as a whole, because there are some reasons for the movement as a whole to stay nonpartisan and not make enemies, but there are individual EAs who are just like: look, we cannot have America fall into authoritarian ethnic nationalism.
And I think that’s absolutely right. You cannot have that if you’re an EA, because it disrupts the path to resolving your international problems. If you have a bunch of empires, it’s much easier for a North Korea, or a corporation, or an entity within one of those empires; maybe there’s a secret project being brewed in Moscow or Beijing or Washington where somebody’s building something,
and that goes nuts. A lot of our cold war doom scenarios, Dr. Strangelove, are that kind of scenario: they built something in secret, they haven’t told anybody, and oops, the way the thing goes, it blows everything up. Okay. So there’s a lot of ways that could play out. But suppose you have your multiple empires, and you have a really robust liberal democratic community as well.
I think what happens here is that liberal democracy wins. It has a winning end game that no authoritarian empire has, because if you're China, you've got to sell the world on Chinese nationalism. And how are you going to sell Moscow on Chinese nationalism?
How is this going to work? In the Cold War they tried that sort of thing. The Russians went down to Africa, like "Russian nationalism?", and Africa was like "no". On the other side, you have freedom, democracy, elections coming from the West, and some people in Africa like that. So you have to sell things in a way that is not headquartered in Moscow or Beijing, if you want the world to come along.
And that’s an advantage that liberal democracy has. Liberal democracy can make alliances internationally, much more easily. Ethnic nationalists face an end game that is really hard to win, but liberal democracy, if you get enough of us, here’s what we do. We just say, okay, you over there authoritarian, even ethnic nationalists authoritarian, we have a plan for you.
And this is my plan for any ethnic nationalist authoritarian. Suppose we ended up with everybody’s in except China and China is one ethnic nationalist authoritarianism. They’re the last one. We say to the last group of people there, okay, look, we’re going to buy you out. Here’s the deal. Look, you’re up against us.
If we fight, it probably ends everything. You have nukes, we have nukes; that's no good. We can sit here as two separate empires and just stew and stare at each other. But if we can't get a global framework, maybe a North Korea pops up while we're not looking and just blasts us all, because little players can do that and get away with it if we don't have a world government.
So here's what we do. All of you who are in power right now in China, you go the way of an awesome constitutional monarchy. You get to party like celebrities, and your descendants get to for as long as you can imagine having descendants. You get absorbed into the wealthy tabloid-celebrity, have-fun world. Go do that.
And those of you who really want to run things can run businesses and other things that fit within a unified global liberal democratic structure. So all the selfish players who just want to have fun: go have fun. All those who want to run things: you can run things in the new system, and you can actually run some better things, because we'll let you into our system once you're playing by rules where everybody can play.
And what's best of all, there is no North Korea-like entity that can kill us all. So now you can have security, you can have fun, you can have profits, you can have all the science and progress, and dream of the happy world.
You can have all of that. Let's just put these nukes away and make sure nobody builds them, and nobody builds the "design your own deadly, humanity-ending virus" kit. So yeah, we'll do that as global democratic policy, we'll get rid of all the nukes. Paradise.
Gus: So is the plan to present these dictatorial world leaders, the people at the top of the governments in China and Russia, with a way to stay powerful and stay famous and stay rich without actually commandeering these nations in bad and risky directions?
Neil: Yes, that’s the idea, that’s the basic idea. That’s the deal that I want, the liberal democratic international community to offer these various, authoritarian, nationalist entities. And that’s a deal that I think they might take in the end if it’s presented sweetly enough and built up to enough.
And really the other thing you need to do is make sure liberal democracy is powerful enough, to really make that deal as a powerful player. So what I want EAs to do with our, the goodwill, we might’ve gotten in Africa or India with our local people, is just go out there and just raise some kind of like happy pro-liberal democracy, Africans, and Indians, where they are, and just, cultivate that, build that.
And now we’ll have leaders that are supportive of that kind of thing who maybe the EA community has taken lots of efforts to educate, yeah. And just raise some leaders from over there. So that’s like the EA side, but the US and Western Europe side is, let’s try to just be good supportive global actors, do lots of foreign aid, raise goodwill, say, join up with the liberal democracies and do well. Where people, for example where China might have set some developing countries up into debt poverty traps, go to the developing country and find some way to get them out of that trap.
If it involves us buying you out of your debt or something like that, we’ll buy you out of your debt, just come to our side and don’t be trapped by China. So that’s worth the money. If we get an ally out of it, and let’s buy some of these.
Gus: So the value proposition for the dictators around the world is: you will stay an aristocrat, and your kids, your descendants, will be taken care of forever.
This is an interesting way to look at the problem, because leaders are self-interested, and they're worried about losing power and maybe getting executed, which happens with relative frequency to dictators. One thing that might prevent this from happening is public repugnance at the idea that the leader of North Korea, who dominated his people and did horrific things, is now a kind of star whose life we follow. Do you think this would be realistic?
Neil: We’ve made deals with bad people many times before, so if it’s more of those and safety’s at the end of it I think we’ll be able to pull this one off.
Gus: Yeah, I agree that it would be a price worth paying. But if we convince dictatorial world leaders to give up their nuclear weapons, don't we lose the advantage of mutually assured destruction, which prevents totalitarianism?
Neil: If that’s the way it’s going. And it’s global democracy, is in there’s one government over the world, democratically elected by every human being in the world. The thing I like about that structure is, now we’ve found a structure within which collective action problems are nicely solved. Everybody within this structure, you won’t have this division where maybe one small actor is taking, one in 1000 risks of destroying everybody for awesome profits.
So if we can stamp out those collective action problems, and it’s global structures that are really good at doing that, where all the people… like, I don’t want to be smashed by some corporation that is risking everybody on Earth’s life for profit. Then I’ll push for regulation to stamp out whatever kind of technology could kill us all.
That’s how you get the nuclear weapon weapons put away. So you have all these authoritarian dictators with their nuclear weapons, and you’re like, let’s fold this into one big system. And they’re like, “okay, we’ll take all the bribes and incentives” and do that. And you’re like, “okay, in this one big system, look, we’re all one country now”. We’re all one country and all the dangerous toys get put away. So yeah.
Centralization versus decentralization
Gus: So the worry here is centralization: a central point of failure in the leadership of the world government, which could result in a previously liberal and democratic world government turning totalitarian. There's no option to exit one country for another if you dislike the policies of your own; a world with a world government is, in a sense, a world without alternatives for citizens. So how would we prevent centralized power from being corrupting?
Neil: Yeah. So let's look at the cases where we've had disasters with centralized power in the nation state period, say over the last 100-200 years. Democratic peace theory comes in various versions, and claims about the ability of democracy to prevent things like famine come in various versions too.
But hey, Amartya Sen won the Nobel Prize for a very good reason. And his argument is actually prefigured by Bentham, because Bentham thinks that if you have democracy, political power will be aligned with total utility, because it's fundamentally controlled by the smart, utility-having entities. Now, getting the animals in and getting the future people in is tricky, but at least you have something where the present people are good at not getting themselves killed.
Now, there are some places where democracy has done something weird, and probably the best example, where democracy did something unhappy, is Nazi Germany being created out of the collapse of the Weimar state. But a funny thing about that case: first of all, you can get all kinds of instability in the formation of an initially new government.
So I can't boast that the world government will be easy to set up initially; there will be some trouble at the beginning. But once the thing gets stable, democracy has a tendency to trundle on. Even in the US, the only reason Donald Trump won was that he got fewer votes and we'd built something bizarre into our constitution that let the fewer-votes person win.
So there was all kinds of imperfect democracy in the first place that allowed this to even happen. The other big thing in the Trump case, and in the Hitler case too, is the rise of ethnic nationalism. That's where an ethnic majority party forms very cohesively and then pursues dumb ideas that happen to be part of the special set of ideas of that ethnic majority. In Trump's case, it was various forms of racism that his followers were fans of, which got them to want to build a wall on the Mexican border.
In the Hitler case, we know what that was: German nationalism of a certain kind, the Aryan nationalism you saw there. When you have a world government, ethnic nationalism is a little bit harder, because humanity as a whole, a mix of entities, is running it. You just don't get the dynamics of "90% of us are one way, let's smash all the small minorities and make a pure racial state of us".
You just couldn’t do that in a global democracy. The votes just don’t work out.
Global totalitarianism
Gus: What about the other side? There's an inbuilt protection against ethnic nationalism in a liberal democratic world government, but is there an inbuilt protection against Stalinism, or communism as we saw it in the USSR?
What I'm drawing on here is a paper by Bryan Caplan, from one of the earliest collections of papers on catastrophic risks, in which he worries about totalitarianism as a stable system that prevents us from reaching our potential.
Imagine extremely advanced surveillance technology that keeps a centralized state in power even while it mistreats its people, because it's capable of manipulating the wishes of the population, controlling what can be said and what cannot be criticized. Is this something to worry about?
Neil: I don’t see how this scenario develops. So suppose we’ve already got our liberal democratic world government set up, everybody around the world is voting. They have a secret ballot. They can vote as they want, if the government’s bad in some way, as long as they have the secret ballot and they can vote as they want, they’ll vote for somebody who decides less surveillance if it really is that bad, or surveillance that doesn’t cause whatever problem is being caused.
So as long as we get robust enough democratic structures, we just hold that off. Now maybe there’s some kind of way to break the system into that, but I just don’t see what it is. And then you can, once you have a world government, you can build up even more. If you’re seeing certain potential threats arise of that nature, I don’t see how it would form.
You could just set up some constitutional rights that are really hard to fight through to defend your structure effectively so that there’s a lot of things you can do to prevent that. So I don’t see how that comes up. I can see why somebody would think, okay. We well, yeah.
Here’s one of the things about the Soviet Union situation. Look, what’s going on in Russia in the early 1900s is really grim. This is just not a well-run state. This is a low life expectancy. And if you look at the life expectancy or in early Russia, we’re just talking about, at some points, this is below 40 years. If I recall correctly, definitely both 50.
It’s just, it’s pretty miserable there. And what’s going on with the actual governance you’ve got, okay. You’ve got Rasputin to put it that way. You’ve just got like random clowns showing up, convincing the queen that they can cure the prince and then getting massive political power out of this. This is just not a state that’s run well.
And once people are wealthy enough, they don’t want to participate in bomb-throwing revolutions. They don’t want to do instability. They want to own stocks and they want their daughters to have piano lessons. So yeah, they’re doing that now and they aren’t going to cause trouble. So if you get enough people like that who are good, middle-class, liberal democratic voters who don’t want to do violence, the energy for authoritarian upheaval is much weaker.
Maybe there’s some new way to get that kind of state. Maybe there’s something and maybe an AI could do it in some dangerous way or something like that at some future point. But it doesn’t look anything like the ones we’ve seen before. And I think those early models are just so, they’re obsolete models for how to set it up.
There might be some future model that can set it up and people need to look into that. But really if the kind of power that could set that up within a liberal democratic world government, is a kind of power that can probably blow up the world under the other. I just don’t see that we are losing on even the totalitarian world government scenario, because the other scenario seems to be destruction. If we don’t go for world government, we don’t survive the centuries. As far as I can tell.
Gus: Let me paint you a scenario, and I don't know how credible it is. Imagine we have a liberal democratic world government, and we set up an agency within this government to monitor for engineered pandemics. For this purpose, we're interested in checking everyone's communication, looking for strings of information that could potentially be turned into deadly viruses.
Now imagine that this agency wants to uphold its own existence: it wants to continue existing and continue getting funding, and its members want to be promoted within the agency. So again, we're thinking of government agents, like all agents, as self-interested.
For this purpose, they must continually find risks, and the definitions of what counts as a risk keep expanding. So you could imagine this becoming like a secret police, with future AI technology checking everyone's communication continually. Then say the leaders of the world government are not especially satisfied when people critique them. Why not clamp down on this to stay in power? Again, I don't know how credible this is, but it's one possible scenario.
Neil: What I’m seeing in this is the power of universal surveillance must be controlled very tightly. We’ve got to make sure no agency gets able to do that.
Now there are tricky things in making sure that agencies can’t do that especially as they could do it very quietly. That’s the thing that’s really risky. I don’t have a solution to the quietness problem immediately, and I’d require some people who are smart about technology to look into what you’d have to do about that.
And definitely the future of governance structures. We can even design future governance structures that, anybody who has that power has to share it with N other entities or something like that, who can also check the work and lots of checks and balances would have to be in the system.
One nice thing humanity has achieved, to some extent, is that some societies are pretty good at keeping the deadly weapons that could be used to kill civilians from actually being turned on civilians. Societies are better and worse at this; the police violence situation in the US is an example of some people getting out of hand with this, and problems result.
But you can properly regulate even the immediate ability to kill, and that's a really powerful ability if it gets out of hand. I actually worry a little that in the US it is a bit out of hand, if politicians are somewhat afraid the police might shoot them, or something like that, if they talk against police interests.
So there may be something of that kind going on in certain places in the US, I worry. If you let something have that kind of power, it's dangerous. But in some places we have ways of making sure that anyone who is going to use that kind of power has to explain and answer to a whole bunch of other people about how they're using it.
So you build up those kinds of structures, and some places successfully have; not everywhere in the world is an immediate gun-to-your-head state. If we can build it up like that, take whatever checks-and-balances system we use to control the arms, suitably modified, and put it over the surveillance, that seems pretty good to me.
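The "share it with N other entities" idea a few paragraphs back can be made concrete with a minimal sketch, assuming a quorum-style rule invented here purely for illustration (the overseer names and the quorum size are hypothetical, not anything Neil specifies): a surveillance power can only be exercised once enough independent overseers have signed off on the record.

```python
from dataclasses import dataclass, field

@dataclass
class SharedPower:
    """A power usable only with sign-off from a quorum of independent overseers."""
    overseers: frozenset[str]     # hypothetical entities who can check the work
    quorum: int                   # approvals required before the power is used
    approvals: set[str] = field(default_factory=set)

    def approve(self, overseer: str) -> None:
        if overseer not in self.overseers:
            raise ValueError(f"unknown overseer: {overseer}")
        self.approvals.add(overseer)

    def authorized(self) -> bool:
        return len(self.approvals) >= self.quorum

surveillance = SharedPower(
    overseers=frozenset({"courts", "legislature", "auditor", "press_ombudsman"}),
    quorum=3,
)
surveillance.approve("courts")
surveillance.approve("auditor")
print(surveillance.authorized())  # False: only two of the three required sign-offs
surveillance.approve("legislature")
print(surveillance.authorized())  # True: the power is now usable, on the record
```

The design choice speaks to the quietness worry: because authorization has to flow through several independent parties, exercising the power quietly becomes structurally harder.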
Gus: Yep. Okay. So a world government, if it's liberal and democratic, would definitely be an improvement for billions of people around the world. One other worry is that a government like this would be stable in a way that prevents experimentation. I see progress as basically conjecture and refutation, so we want experimentation in government structures and in ways of regulating society, and maybe a world government would prevent this. Maybe we wouldn't have information about which systems are better than others, because we cannot experiment.
Neil: This is something to keep in mind when designing global governance structures. But one thing that would at least mitigate that worry is understanding what the function of the world government is and what the function of local governance would be. The function of the world government, as I see it, is to solve collective action problems that affect the entire world.
These technological risks are examples of collective action problems. Militaries are things we have because we have borders across which hostile entities might sit. Get rid of those, and defense spending goes to zero, which unleashes so much money. I really like that part of world government.
We can then spend it on fun stuff, on science and keeping us safe and universities and whatever; all of that stays up. Now, when we have a world government solving these collective action problems, there's plenty of room for all kinds of local governments to do things like managing local city infrastructure, because really, why would the world government need to do that?
If people in China want to have their festivals a certain way, and people in France want different festivals, fine. Local governments will put those on. All kinds of things can be done; the question is just which body is best at handling each. Collective action problems that affect everybody go to the world. Stuff in your town goes to the town.
So you'll have experimentation between all the little towns and cities, and they can pick up the best from each other. But you might lose experimentation in how to solve global collective action problems. We should be slow on that, because that's what we're doing to prevent the end of the world. Going a little slow on things where, if we get them wrong, we don't get another chance, is I think justified.
Gus: I’m genuinely undecided about this issue. I find it extremely difficult to reason about the trade-offs in centralization versus decentralization. It’s a, I think it’s an evolving topic. I think effective altruism needs to think more about this issue.
Neil: Yeah. I’ll, give my support for democratic centralism to solve collective action problems.
What is EA doing wrong?
Gus: Let’s end by discussing what effective altruism is doing wrong. This is one of the favorite topics for my listeners, is to hear how they’re wrong. You’ve been a member of this movement for let’s say a decade. So what’s interesting would be to know where your views diverge the most from the effective altruism mainstream views.
Neil: One mistake is to think that effective altruists are unified enough that there’s one thing we can all be doing wrong. Yeah.
Gus: True. True.
Neil: Yeah. As a global EA myself, I've been through so many different communities. The two big centers of gravity for EA are Oxford, where a bunch of Giving What We Can stuff is headquartered, and the Bay Area, where a lot of our pro-EA billionaires and AI people are. So we have these two centers. And the Bay Area culture, I think, is something that a lot of people have a bunch of feelings about, and it does express itself in the movement.
But we have a lot of other places too; I've seen EA chapters in Hong Kong and Singapore and all over the world. I look forward to when our Africans get going, when our Indians get going. Especially, I just look forward to the EA kids of the future, the saved generations, the ones who themselves took the deworming pills.
The ones who themselves had the bed nets. Oh, when they rise, it'll be beautiful. We'll have a diverse population of people; it's moving that way. The ideas are mainstreaming, and as they do, we get more diverse. So that's really good, and I'm very optimistic about the movement. There are a lot of little things I could pick on here and there.
I wish more people were into your naturalistic realist utilitarian combination in the philosophy. There are too many anti-realists out there on the real-world EA side who don't know how much objective and universal moral value is worth for the project, and too many non-naturalists on the metaethics side, who have bought into the dogmas of the past 25 or 30 years in metaethics and don't know that good old British empiricism, with Hume and Hutcheson and all these good folks, Adam Smith and Locke, is back there to save the day and get us into the ethics of a Mill and a Bentham. That's all out there. But yeah, I'm optimistic about the movement.
I really think we do have a lot of good people pushing us in a good direction, and we’re only going to get better ones as time passes. Sorry to disappoint, but we’re doing really well.
Gus: Okay. There’s been a development in effective altruism from the early beginnings. There was a lot of focus on global health charities. Rating the effectiveness of global health interventions. As the decade has passed, we’ve seen more focus go towards preventing existential risks from nuclear, from pandemics, from AI. And I don’t want to overstate this because the original, kind of the origins of EA involves people such as Eliezer Yudkowsky who was very into AI risk in the, mid-2000s. One of the central figures Holden Karnofsky, William MacAskill, Toby Ord were aware of these risks and interested in them from the beginning.
Do you see EA changing focus, again, as radically as we might characterize it as having changed. Is there some other place we could land? Is there some development towards another focus that we havn’t yet discovered?
Neil: I've actually suggested one to you just in the last hour or so: a political focus on global liberal democracy. That's where I think the longtermist solution pushes us. And the global poverty people, I think, will help us get there by cashing up the poor, because it'll be easier to get liberal democracy with Africa being an economically productive, reasonably well-to-do place. When you have inequality of a massive kind, getting a unified structure around everybody is tough, because the rich Westerners are like, do I have to give away all my money to the Africans?
How do we get one unified political system for people who might believe completely different things because they have different educational situations? You have much more trouble there. But suppose you raise everybody to a similar level. Now the global poverty people are coming together with the international cooperation people, and we're working together to do the long-term thing.
So I actually see those two sides as unified in the kind of project I've suggested. And once you get liberal democracy globally, and once Africa gets to vote on social welfare policy, all right, now some money goes out there, and we can do it in a properly formalized, government-run kind of way.
We get all the power of that behind it, all the efficiency you can get once you really try to do it. Give people a good start: if some kid is growing up in an unfortunate parental situation and doesn't have enough resources, make sure the kid gets food and all the resources that are needed.
Children growing up in poverty is just human capital on fire. Just put out the fire. This is a huge thing that needs to be done. I'm for massive redistribution: destroy incentives if need be, and put out that human capital fire we've got burning in the world.
That’s my dad, goodness knows what happens to him if he doesn’t eat. And then I might not get born, I don’t really know what happens in the sixties and seventies. He probably survives that, maybe he marries my mom, but maybe he like, doesn’t do well in school because he couldn’t eat and just falls out of school. You know Could happen.
Yeah. So right. It’s just, there’s so much we could do where I think the EA foci are coming together, in a nice way. If you want to deal with wild animal suffering, having a world government is going to make it a lot easier. So yeah, there’s just all kinds of problems that gets solved together at once by the structure.
And I really see the EA movement, maybe it discovers some new cause areas, but I don’t see anything that pushes it towards some radical tearing apart. We have disagreement, we’ll have disagreement. So far, I’ve seen healthy disagreement. There’s a bunch of dogmatic people who are dogmatic, about AI being the big thing, global poverty being the big thing and AI is stupid or animals. There’s dogmatic people, but there’s a big core people who understand that all these causes are important in some way.
And are just thinking about them together. And I think that’s the way to think about them.
Gus: Thank you for spending this time with me. Your enthusiasm is wonderful; the way you present your ideas is convincing.
Neil: I’m honored to hear that, Gus, thank you so much for giving me this opportunity.
Is metaethics useful?
Gus: And so I’ll be skeptical about metaethics as a field and ask: why should I care?
Why should people in general care about metaethics? Isn’t this some very academic endeavor that doesn’t really affect what we’re trying to do in the world?
Neil: It would be like that if it didn’t affect normative ethics, and a lot of people have pursued metaethics in a way that separates these two domains, where they say, here, we’re just trying to do something like, well in philosophy of mathematics, whatever the philosophers of mathematics say, the actual mathematicians, it’s not going to affect them.
They just go on and do what they’re doing. And the philosophers will argue about whether they are engaging with Plato’s forms or whether they’re just manipulating symbols, but either way the math, the mathematical truths come out the same. It’s just a question of whether they are a Plato’s forms truths, or whether they’re, truths about the symbols.
So some people see it that way. And I don’t think that it’s all going to be the same, normative ethics wise, how the metaethics turns out, because if there are objective moral facts and they’re something like scientific facts, in that case, the right way to investigate them will be something scientific and perhaps a natural thing to expect is that they’d surprise us as much as science surprises us.
When we discovered that water isn’t a simple substance, it’s actually, if you took two different gas molecules take the hydrogen gas, split it in half, take the two atoms, take the oxygen, split it in half, take one of those two atoms and stick them all together.
That’s what water is. Okay. That’s surprising. Nobody thought it would be like pieces of things we know of as gases that would have astonished people in the 1700s. But so it is, and maybe ethics, if it turns out to be a scientific kind of endeavor, we will find the surprises. If it can be empirically known, if it’s about concrete objectives or stuff that can be surprising to us.
And that’s how I think ethics in the end turns out. It’s not something that matches our intuitions very neatly and approaches on which it matches our intuitions, you’re either giving the mind amazing powers to figure out moral truths or you’re weakening the moral truth so they can just be shadows of your thoughts and giving up on objectivity and I’m not happy with either of those options.
I don’t want to make the heroic assumptions that allow for us to know these amazing facts. And I don’t think we can settle for real ethics that isn’t objective. A true morality would have to be all of those, knowable in some way that is genuinely knowable and not through some kind of faking and it would have to be genuinely objective and universal.
Gus: Another objection would be to think that whatever the metaethical truth, we can all agree on what we want the world to look like in practice. So we want, everyone wants a flourishing society and we can work towards that as a vague goal without settling these difficult philosophical issues.
Neil: So there might be situations in which that is possible, situations in which all the reasonable, ethical views of point in the same direction. We would be lucky as a political community, I suppose if things turned out that way. And I don’t think things actually turn out that way.
I think moral disagreement is in fact in politics and in society today reasonably widespread. And if you look historically not at the sort of narrow sets of views that we have in our current time that I guess the current state of the world imposes some constraints on there are certain kinds of views that you couldn’t have and participate in the modern global economy, views that are so hostile to outsiders that they just prevent you from engaging with others.
Those, those views have been held at some level in the past. A lot of societies go to the point where they think it’s okay to commit genocides against others and even heroic to commit genocide, even obligatory to commit genocide. That really weighs heavily on me in a way that it doesn’t weigh on I think a lot of people working in metaethics.
The way I see what we’re supposed to do in normative ethics is we have to get, we can’t just sit on this idea that our intuitions are very widely shared and that lots of people have nice intuitions like us. Just a look back before 1945, not just the regimes immediately before, but the entire sweep of human history before just shows you lots of people who think killing others really is the heroic thing to do, killing entire other societies.
It’s just, it’s a crazy world out there. It’s actually only around that time around world war two, a little bit before that Raphael Lemkin coins the term genocide because there isn’t even a term that carries the kind of weight that we have now for that.
It’s funny, Lemkin is thinking of an alternate word for it first, and at first he tries out vandalism as a word for genocide. What a strange linguistic fact that we switched over to the other word now. And yeah, that’s what it is. It’s just, yeah, the past is just monstrous at some levels. And that kind of monstrosity shows us how far human moral views can diverge how important it is to try to find something that will take us reliably towards the truth, because I don’t trust the intuitions of a species that falls into pro-genocide views, as often as humans do.
Naturalistic realism
Gus: Good point. If we look at the philosophical community, I see two broad tents, we could say, of views. One is realist and non-naturalist, and the other is naturalist and anti-realist. And I am worried about people perceiving this as a dichotomy where you can either be a moral realist or you can be a naturalist.
Is there a way forward for a naturalistic realism?
Neil: Gus, you have spoken to my heart. The project with which I began philosophy as like an 18 year old, when I decided to major in the subject, was; can I find the moral truth, the truth about good and evil in the natural world in a way continuous with the sciences, broadly speaking. I didn’t even really understand that was what I was trying to do, but that was just what the aim was from the beginning.
Broadly empiricist kind of way of finding objective moral truth is what I was interested in. And I think that can be done. Metaethics today really doesn’t think it can be done. And your description of what the field is like at present is I think accurate to the way it has moved over the last 30 years.
And to go into why it moved that way, I think the focus on reasons for action as the fundamental thing to look for in metaethics, really made that happen because the way reasons for actions are just lend themselves to non-naturalist treatments. The kinds of things you’d see in Jonathan Dancy, Tim Scanlon, many other philosophers go that way.
And there’s also anti-realist ways. I feel like Christine Korsgaard did a lot to suggest something that really took the psychology of reflection and deliberation seriously. And it seems that the most natural way to develop that was just to go anti-realist with the psychology of reflection and deliberation being given a certain kind of non-cognitivist interpretation or something like that.
So there were views like that out there too. People also do the deliberation thing in a non-naturalist way, but at any rate, that’s the kind of set of approaches we have. You start out from judgments about reasons and you give them either a non-natural metaethical treatment where they’re describing abstract facts that you only know through reason, the Plato’s forms kind of thing.
David, Enoch describes himself as a Platonist and that’s the kind of picture we have there. Michael Huemer has similar views. All those, the whole range of views is available on one side, or you say really, we can’t figure out how this is describing the natural world. And we don’t want to go in for the non-naturalism if we’re naturalists about the metaphysics and then you go for anti-realism, like you said, and non-cognitivists are doing this.
Reasons for action
Gus: So these two camps, you think there’s a kind of a third way?
Neil: Absolutely. And one of the things I want to do to get there is push back against the idea that what we’re doing in metaethics is trying to characterize reasons for action. And I want to go into why that’s really not a good way to go here. I think there’s a big problem with understanding action as fundamental metaethically, because the psychology that engages with action most directly isn’t the psychology of belief.
And my work on the Humean theory of motivation was really where this came out to me very clearly. So if you look at the production of action, the real things that seem to direct and drive us, are our desires or as Hume called, them our passions and the role of belief in the direction of action is just I have this means-end belief about this is how I attain the end that I desire or have a passion for.
So the direction seems to be coming from passion or desire. And the rule of belief is just, okay if you want that, this is how you get it. Now, what reasons for action are supposed to do is, they’re supposed to direct your action. They’re supposed to be the things that pick which action you’re going to do.
And not just, this is how I, find the means to achieve my antecedently desired end. No they’re supposed to set your end, set the goals of action. Now here’s the problem with this inhuman psychology. The way that humans are getting the goals of action, it seems to just be, they have these desires and the desires drive them.
Michael Smith and some other philosophers have thought the way it actually happens is you have a belief about reasons for action or some other thing. And this, you can have desires driving you, but there is a way to get motivation. If you have a belief about reasons for action, that can just create a desire and then the desire will drive you. And so Smith presents himself as a Humean theorist about motivation. For that reason, he says I’m doing the desire belief thing too, but he allows a belief to generate desires by reasoning.
So ultimately it’s belief about reasons that’s driving us and Smith’s view was regarded as a Humean desire-belief view in good standing for a very long time. And still largely is. The thing that happened there though, it made it look like the desire-belief psychology of human action. It made it look like that psychology was compatible with the reasons for action view, because you just get a belief about a reason, and that would create a new desire just automatically through, not automatically, but through quick, simple inference, the way inference usually works.
And you just get this and if you didn’t do it, you were irrational or something. That’s how Smith sees it. If you can’t, if you think you have a reason to do this, but you weren’t motivated, well then you have akrasia, a weakness of will and you’re irrational. And that was the way that Smith thought about it and the way that a lot of people thought about it.
Even those who call themselves Humeans, but looking at actual human motivation, I don’t think that’s the case and the really good empirical case for this. I’m not going to go to any of the psychology that got into trouble with the replication crisis. A much stronger case is provided by the failure of gay conversion therapy.
As I see it, because this was a project by people who really were invested in changing people’s desires, in some cases in line with their moral beliefs, because often people who were set in for this, many were unwilling and forced in, in some kind of way. But there were people who were doing this because they wanted to be right with God and right with the holy way of doing things and what was good and what was right and what was virtuous, that’s how they saw what they were doing.
And they wanted to get rid of their homosexual desires. They had the moral beliefs, the beliefs, that there is a reason, all different kinds of moral reasons, practical reasons, dealing with the afterlife, any kind of reason you want to put in there, you can find it there. A reason to have different desires, to do different actions, to be heterosexual, to get married to someone of the opposite gender, all kinds of things you can put them in, however you want.
And the attempt to generate new desires, new passions from these moral beliefs completely failed. Even with the experimenters, as it were, being completely motivated to get their result. This was a case where they really wanted to get that result or else what they were doing is complete garbage and it was complete garbage.
Their result could not be found. It completely failed. The only people who believe in this are people now within the evangelical network. And that’s not where you want to be, I think, as any kind of naturalistically inclined person. So that’s really where I see the failure of moral belief to generate certain kinds of practical consequences that were boasted for it.
I don’t think the content of moral belief can be analyzed as fundamentally practical or action-guiding because we’re just not seeing the outputs from the moral attitudes that suggest that. They don’t directly motivate actions and they don’t do the Michael Smith thing of generating new desires through reasoning.
So there is really nothing here. No case here for the content of moral belief to be a distinctively, practical, motivational thing in the way that anti-Humeans and even Michael Smith and the other Humeans who allow that intermediate position suggest. So really I think this, that whole project of understanding moral judgment as fundamentally practical is empirically now defeated.
Feelings as the basis of morality
Neil: What we need to go do is go in a different way. Not the way that Immanuel Kant went, where he thought all these judgments were about reasons for action. It’s about feeling. It’s about the perceptual, experiential side. That’s really where the content of moral judgment I think is to be found. And when you look at human psychology, you just see that the perceptual, experiential side that’s incredibly fertile, tons of stuff is going on there.
Perception goes on in the human mind so many times per second, perceptual belief formation. Right now, I’m having all kinds of beliefs immediately in the moment formed about the face of Gus and the thoughts of Gus as he looks back at me and I see myself and I’m like, oh, I’m moving around a lot. You watch a sports game or something like that, where things are moving and your beliefs are just flickering so fast moving.
That’s where you find fast activity in belief that is just going on a whole lot, nothing really strained about it. The human mind is just set up to deliver perception to belief and the way it does that is some content comes in perception. You represent the world a certain way, and then you believe that the world is the way you see it.
That’s what you usually do. There are a couple of cases where maybe, you have some reason to doubt that the world is the way it seems, but for the most part, the world seems a way, and you just take that into belief. Once you pay attention to how the world seems. Now, what happens when a feeling like guilt comes in perceptually?
It’s an experience just like my experience of my shirt as blue. If you have an experience of a bluish color on my shirt or a sort of a yellowish color on the map behind me, you form beliefs about the map being yellow, my shirt being blue, as a result. So that is just how beliefs are quickly formed.
And if a feeling of guilt comes in about something, you did, you remember. I have this often, I remember something I did 15 years ago where I said something, that was mean that I didn’t realize it was mean. And I’m like, oh, I have that feeling. And in there it just looks to me like I did something wrong, so clearly.
Wish I hadn’t done that. And you feel, you believe that you said something wrong, back then when that feeling strikes you, because that’s how it looks. That’s how I think moral judgment really works. And now we’re not thinking about the content of moral judgment as okay, this is action-guiding, even though it’s about an action.
Really, the way tounderstand it. It’s like color. It’s perceptual. A feeling came in and I believed that what there was in the world was something that matched my feeling, a wrongness in the action. And that’s what I see wrongness as. It’s not, fundamentally, there is a universal or categorical reason not to do this. You can build that up at some level, but that doesn’t characterize the fundamental nature of the thing.
It’s that this action has the disgusting color of guilt. Is this basically what it is. So that’s how I see the content of moral judgment. It’s really a perceptual thing to be understood along the lines of color. And if you do it that way now you’re not in the psychology of action. You’re in the psychology of perception and in the psychology of perception, there’s a lot more opportunities to get realism going because all this perceptual content can be evaluated for accuracy which is just one step removed from truth.
When it’s in perception, you call it accuracy. When it’s in belief, you call it truth. Your perception of there being a map behind me. You immediately form the belief. There is a map behind me. The perceptual state could be accurate or inaccurate depending on what it corresponds with, whether it corresponds with reality, the belief can be true or false depending on whether it corresponds with reality.
And I see what we’re trying to do in getting objectivity, not well, truth is what objectivity looks like in belief, but in perception, accuracy can be objectivity because it’s correspondence fundamentally, your mind corresponding to reality is just a really awesome strong kind of objectivity. And let’s go for that.
Let’s try to get these judgements about; I should feel guilty about that, or guilt is the thing to feel about that action I did, or about this beautiful future that could be, hope is the feeling to have. I’m a utilitarian, I like some of these Brave New World-ish futures and other people are like, no, that’s horrible.
They see it with horror. The question to be asked is this future something to hope for, or to be horrified by? And that’s where, in these science fiction futures where things are strange to many people, they’re horrified that I hope for that because there’s more pleasure in it. Which is the right feeling to have, I think is the right way to understand the content of moral judgment and not the reasons for action framework that metaethicists for the last 30 years have been digging themselves deeper and deeper into.
I think there is nothing there for finding any sort of important, interesting, moral truth. You can go anti-realist in the end and that’s all you can do, but there is objective and universal truth to be found in the naturalistic realist way. If you look at this as fundamentally perceptual.
Accurate feelings
Gus: So what I believe is that what’s fundamental here in these feelings is pain and pleasure and more complex feelings such as guilt or horror is colored by pain and pleasure. So we can separate feelings into feelings that feel good or feelings that feel bad. And this feeling of goodness is the central fundamental objective value, that’s within our conscious experience. And so do you agree that pain and pleasure is the fundamental value and guilt and horror, for example, you could call them secondary or more complex phenomena.
Neil: Absolutely. I think that the pleasure and displeasure, as like to say just to get beyond the bodily connotations of pain, but you’re getting it basically, yes. I think those are, as far as moral facts go, I am a utilitarian, so I take pleasure and displeasure to be the fundamental value and disvalue that there are, I’m a hedonic utilitarian.
Yes, that’s right. And the interesting thing about our positively and negatively valenced moral judgments is that all of them, as far as I can tell, follow this rule where if they’re positively valenced, if they present an action as right, state of affairs as good, a person as virtuous. In that case they’re pleasant.
The feelings that we have, that reveal the world to us, that way are pleasant feelings. So if you think about the future and this future state of affairs, perhaps where there are people living in a very strange way, but having a whole lot of pleasure, maybe they’re all in the experience machine or something like that.
And they’re all very happy. I think that’s good. And to get into why I think that’s something to hope for. This is where the feeling-view can get correspondence going in a way that is just not even a possibility on the reasons for action view. Suppose I hope for the experience machine future.
The core thing in my feeling of hope that makes it a positively valenced representation is the pleasure in it. I am pleased by that possibility. Now feelings like horror. If you’re horrified by everybody in the experience machine you’re displeased, it couldn’t be horror with just pure pleasure. If there was no displeasure in it, it would be something other than horror.
To be hope, it has to be pleasant and to be horror, it has to be unpleasant or at least push your pleasure up or push your pleasure down. That’s what hope and horror do I think. So that is fundamental to their nature. And without that they aren’t moral feelings. So why should we hope for the experience machine future?
In the experience machine future, by stipulation, there’s a lot more pleasure. And if you hope for it, the hope in your feelings is in objective correspondence with the thing it represents, the experience machine future. There’s pleasure in your feeling. There is pleasure in the future. And I think that match is what makes hoping for the experience machine future objectively accurate. There’s pleasure in both. Match. But the person who is horrified by the experience machine future has a negative judgment and displeasure judgment, a displeasure-laden judgment about a pleasure-laden scenario, and that’s a mismatch.
And that’s why that person is out of correspondence with reality, similar for the person who is neutral about the experience machine, future, that person isn’t so horribly mismatched, but there’s still a mismatch.
So yeah, that’s how I get hedonism out of the positive valence, negative valence neutrality framework and get it to correspond, to pleasure, neutrality and displeasure in the world. More pleasure in reality is the thing to hope for, creating more pleasure is something to be proud of, being the kind of person or those kinds of people who are disposed to create more pleasure, those are the people to admire.
And all those feelings: hope, pride and admiration are pleasant. They correspond and match. Meanwhile, a horror at great suffering in the future, guilt about causing displeasure, hatred of those who would intentionally cause displeasure to others, and contempt towards those who just systematically cause displeasure to others perhaps unintentionally, but just by being really, dumb and mean or something like that.
I guess if they’re mean hatred is more it, but if they’re just careless about it and just causing displeasure all over the place, contempt could be the attitude. There’s a match between the pleasure and displeasure in the attitude and the pleasure and displeasure of the world.
And I think that’s what the correspondence between the mind and the world and ethics is fundamentally grounded in.
Objectivity
Gus: What you’re doing, or what I see you as doing, is understanding the complex moral concepts, such as guilt and horror in terms of the basic moral concept, that pleasure is goodness. It’s very interesting to me, this fundamental moral fact that the pleasure is goodness. How could this be an objective fact?
Neil: Good. So it’s objectivity if we define that the way, for example, Sharon Street defines objectivity. She calls it attitude, independence, or stance independence.
So that’s it’s good independently of what anyone thinks or feels about it. I think the goodness of pleasure is that way, it’s good independently of what anyone actually does think or feel about it. So suppose we’re in a world where everybody is horrified by the experience machine future, where everybody is experiencing a lot of pleasure, but they’re disconnected from reality.
Okay. That doesn’t make the experience machine future bad, because in that future, there still is a lot of pleasure. And what are our people doing? They’re having the down judgment towards the up thing. Mismatch. They’re wrong. Their judgements, no matter what people’s judgements are, they don’t change the truth.
So it looks that way like objectivity and that’s what proper objectivity, is supposed to be. Now as theorists, there’s something we can do to discover this. And there, you might begin in the feelings, but what we’re doing there, just to understand why this doesn’t lose objectivity. Let’s just think about how you’d figure out the truth of a belief.
Now to know whether a belief is true we’d have to know what the content of the belief is. If we just say “Gus has a belief” and ask Neil, “Neil is Gus’, belief true?” , come on and tell me what the belief is. Then I can give you a better answer, yeah, you gotta go to the content first for me to give a good answer. We’ve got to figure out, if we’re talking about, are moral judgments, true or false, or can they correspond to things in the world. We’ve got to investigate what their content is, or we’re not doing a good job.
So I think here, what I’m trying to do with all this psychological background of the gay conversion therapy example is trying to get clear on what the content of moral judgment is. Is it really the practical thing that say Christine Korsgaard, Michael Smith, Tim Scanlon, all the big metaethicists almost of the last 30 years they’ve told us, and I think they’re wrong.
They really did not follow the lesson of gay conversion therapy, which was out there to be seen by everybody. And it shows them that moral judgment when you have, it just is practically not very powerful. It really doesn’t have any practical oomph of its own. That all comes from a desire one way or the other.
And so now let’s do what we do when we have things formed by perception, let’s investigate whether the perception was accurate. That would be a nice way to get a handle on this and understand what makes these perceptions accurate because the content of the perception is going to be absorbed by the content of the belief.
That’s how it is with color. You see a blue thing, you think the thing is blue, you see a yellow thing, you think the thing is yellow. You see a thing that is misleadingly colored, maybe a white thing in red light and you form a false belief about it, maybe, we need to investigate it that way and see what’s going on with people’s beliefs, understand the content.
And once we understand the content, "this belief is about blue", "this one is about yellow", "this one's about red", and "this one is about what to feel guilty about", once we see all that, we can look for the right things in the world and see if the world matches up. And the reasons-for-action views were having trouble matching up with the world, because where are the reasons for action? We've invented something here that I don't think fits naturally into the world.
When we look for accuracy conditions for feeling, understand them in terms of correspondence with the perceptual states that led to the belief, namely these feelings, the phenomenology of emotion: guilt, hope, horror, admiration, hatred, contempt, pride, all that. We just try to match the feelings to reality, and we find objective, universal matches.
That's the ethical truth that's to be found there. And it's a hedonistic ethical truth. That's what the simple, stripped-down way of looking at the natural world gives you. The scientific worldview, as far as I can tell, gives you a universal and objective morality, and it is hedonic utilitarianism.
Gus: Take the feeling of pain. When we are experiencing pain, how could this be attitude-independent? Isn't that exactly a reaction to the world that's as subjective as they come? So, if I've burned my hand, for example, I feel pain and this is a subjective attitude. How could this be the grounds of an objective morality?
Neil: You’re right. That is a subjective attitude. And at some level, I don’t think that attitude is the grounds of an objective morality. The attitude of pain as such, isn’t really anything I emphasize that heavily. Rather let’s just look at the feelings involved. Just the qualia of perhaps might be the way to talk about this because that’s where I’d see the pleasure and the displeasure.
They’re just experiences, feelings, just like brightness or volume or something like that. There is just dimensions of experience like that or components of experience. That’s the place to find this. Okay. Now let’s look at pain just to get into the answer to your question. I want to go into the ways that pain really is subjective, like you’re saying.
Suppose I touch something hot and feel pain. Maybe a different kind of creature that was used to very high heat might not feel any pain at that point. So yeah, in a way, touching that thing is painful, and that is subjective. But utilitarianism wasn't a theory about the rightness or wrongness or goodness or badness of touching that thing, which is subjectively painful; it's about the goodness of pleasure and the badness of displeasure. That's what the objective facts are. And now we need to look into which attitudes represent things as distinctively morally charged, and in which attitudes we apply moral concepts, because we're looking for the objectivity of morality here, not the objectivity of painfulness.
The objectivity of painfulness I've given up on. Is the stove objectively painful? I'm not going to say that. Some alien will touch the stove and then feel great pleasure. Okay, yeah, that's what's going to happen there. There's no objectivity to be found about what is painful, as far as what external thing is pain-causing. Where you can find it is something like "pain is bad".
It’s pain as a kind of displeasure. It’s not pain unless there’s some displeasure in it. And at least defined that way, pain is bad. Displeasure is the bad thing. And our judgments about displeasure well those aren’t really pain feelings about it.
Displeasure is bad, I think is not really a content of a tactile judgment about the stove, which is really what I’m having when I’m getting the “stove bad”, “stove painful”, a stove causes pain, all those things. That’s what I’m getting there with stove judgements, but really what I want here is pain judgments. And where am I making those? That’s really to be found in my hope for a certain future where there is less and my horror of a future where there is more.
Universality
Gus: Okay. We have a metaethics that can ground hedonistic utilitarianism. So, what are the best arguments against this view?
Neil: Ah, let’s see. Great. Let me think about this. It’s been a while since I’ve been asked to do this, because I’ve been fighting for the positive view for awhile and I haven’t really brought up.
Gus: Yeah. Yeah.
Neil: I’ll tell you where the most novel stuff is in this view. And that’s a place where I don’t know what the counterarguments are, but there are going to be some, because I’ve done something really new here in setting up this view.
And I invite you to come up with the best counterargument to it because that’s where it’s going to be easiest to attack. This is in how I’m seeing the accuracy conditions of our moral feelings. So what makes hope accurate? What makes horror accurate? In answering those questions, we discover what is good and what is bad.
I think what makes horror accurate is the horrible, the really bad and what makes hope accurate is the thing to hope for, the good. So yeah, that’s what we’re trying to figure out here. So how does accuracy work here? My defense of hedonic utilitarianism that I’ve given you so far is dependent on a certain kind of claim about the accuracy conditions of hope and horror, and really of pleasure and displeasure, the fundamental things that give them their moral nature.
And what I want to say there is that what makes them accurate is qualitative identity with the thing they represent. It's just matching the thing they represent, as one golf ball matches another golf ball.
Can a bit of pleasure match another bit of pleasure? If they're just experiences, if we see them that way, experientially, they can be the same. If it's just a good feeling matching a good feeling, yeah, you can get identity between those things at the level of experience. And maybe you don't get perfect identity, but you get near matches that are enough for a high degree of accuracy.
So accuracy works that way. It's a gradable notion: you can be more or less accurate. Perfect accuracy is rare, but enough accuracy is usually enough.
Yeah. And that’s still, if you’re trying to accurately represent what one golf ball looks like, can you give me another, you’ve given me a great representation of how the other one looks. So ,accuracy we’ll find it in all kinds of places in more or less ways. You just don’t want the thing where you’re feeling the “down” thing about the “up” feeling, being horrified by children, enjoying ice cream as Leon Kass was, that guy. That’s a vice right there, if you’re horrified by that.
Those kinds of matches and mismatches, are what I’m taking as accuracy. And the idea here is that, yeah, this kind of identity, same thing on both sides, is what accuracy is.
Why do I think this? This is a really novel claim, as far as I can tell. A guy named Colin Marshall has told me that Schopenhauer had an interesting view along these lines. Adam Smith's theory of empathy matches these things, but it doesn't quite do what I'm doing with moral judgment. There are some predecessors, but there's hardly anything that brings this into ethics. Maybe there's some Buddhist or Mohist or [unintelligible] philosopher in India, or somebody in Cyrene back in 300 BC or something, who had this, but I've never seen it. That this match, qualitative identity, is really what accuracy is.
Why do I have this? Here’s the neat thing about it. It gives you universality, all metaphysically possible minds that have a pleasure response to something and that demand of it, universality and objectivity. There has to be something real in the world that would make my judgment accurate and make the judgment of every metaphysically possible being who feels as I do accurate. All minds that feel as I do must be accurate, all metaphysically possible minds that feel as I do must be accurate here.
If we’re talking about an experience of pleasure and the experience is qualitatively identical, with pleasure in that way. Pleasure to pleasure match. Everyone who feels the pleasure. It’s the same thing as pleasure. And any metaphysically possible mind will be accurate to the pleasure of the world, because it’s just the same thing.
So you can get universality and objectivity out of this. And that's the argument for it: it has to be identity that constitutes the match. Identity, as far as I know, is the simplest way to get universal accuracy. You can of course be a non-naturalist and build up a whole bunch of complex relations that apply to all possible minds,
relations that don't really deal with the content of the internal sensation. Just say, "hey, there's a non-natural fact": this complicated thing where there is a whole bunch of complex stuff going on, that's what makes our feelings accurate. And non-naturalism can be reconstructed there. You might get a theory that better matches your intuitions and say, yeah, that's right for all metaphysically possible minds; you can build that up.
But there’s just nothing empirically to suggest that’s really how it is unless you take our moral intuitions really seriously. And again, I’m the kind of person who thinks human beings are thinking genocide is right, pretty often. I am not taking these intuitions as something to build a giant non-naturalist, metaphysical structure on top of when we are, throughout so much of our history, thinking that killing each other in horrible ways is a great thing to do that you should be proud of, that you ought to do because the heroic guy who leads your people commanded or something like that, or God says. What are people even doing?
A God that commanded that would be an evil God, it’s just, it’s a disaster. So I am not taking these seriously enough to build a giant metaphysics on them. Mathematics. I see why people have a case. There’s a real case there that those things are corresponding to something pretty amazing because there’s just a lot more agreement.
There’s a guy named Justin Clarke Doane, in metaethics, who denies this, who thinks its a lot more similar, but I don’t think he’s taken a serious empirical look at the frequency of pro-genocide views in human history. Really? You need to do this empirically and look at how often humans are getting it wrong and they are messing it up.
They are getting pro-genocide views. They’re just, it’s disastrous. So yeah, don’t build a giant metaphysical picture on that. Do the simple thing. Do accuracy.
How many concepts?
Gus: I agree that we cannot trust our intuitions. And maybe as a, let's say, critique of your view, I'll give you my own take on the metaethics of hedonistic utilitarianism. And I should say that this is not original to me. This is a view developed by Sharon Hewitt Rawlette that I have extended, let's say. So, in what sense is pain bad? The concept of intrinsic badness, or the concept of intrinsic goodness, is learned by experience. So when we feel pain, this is the content of the concept of intrinsic badness.
And the question is then: how can we say that pain is badness? When two concepts point at the same thing, then we know that this is the identity relation we're looking for. And so I see all other moral concepts as secondary. So whether an action is right, whether a person is virtuous, whether an institution is just, must be built up from this very basic fact that pain is badness and pleasure is goodness.
And so maybe we are about as close as we can get to agreeing about these things, but maybe the biggest disagreements sometimes arise between people who are very much in agreement. And I would urge you to take an even simpler, even more kind of flat-footed view, and just have this one central fact: that valence is identical to intrinsic value.
Neil: I accept that view, that valence is identical to intrinsic value. Yes. Now, the question here, to go beyond that, is: what is the content of moral judgment? Because I want to end up where Sharon Hewitt Rawlette ends up. Sharon, if you're watching: hi, you're awesome. I like your work. You have a big role in the paper where I'm talking about this. I need to send you that paper, Gus, and I need to send her this paper.
We emailed 10 years ago when we were realizing that we had similar views, and it was just great. And I need to show her where I've ended up, because I've seen what she has done. Yeah, it's nice. But anyway, to get to that, this is actually where I make some changes on top of Sharon's framework, because I think the story about the moral concepts needs to be developed a little bit further.
It’s not so much that I’m well at a certain level. I don’t think I’m an analytic naturalist of the kind she is. But I have a way within synthetic naturalism to get the “Gus and Sharon”- view to come out. I think. And the way you were talking about it made it seem more synthetic that there were two concepts pointing at the same thing which is the way I like it.
I guess if the two concepts are related enough, it could be analytic that you can analyze one into the other, and then they pointed the same thing, but I want to do it in a way that at least leaves open the possibility of a synthetic naturalism because I think there’s Open Question Argument problems.
if you proceed from what Sharon is doing in what seems like the most straightforward way. So I'm trying to build a nice way to get around that and solve those problems for her, really, because it ends up going in a way that she generally suggests. So what's going on here is, and let me ask you, Gus, because you've presented it as your own view: what would you say when I'm judging
"this action is wrong"? Do you have a deeper way of analyzing that wrong-judgment? So I believe this action is wrong; is there anything deeper? Some people would say wrong means there is an objective or categorical reason not to do the action. I say it's an action that you should feel displeased about, and feel a feeling like guilt if it's your action, or anger if it's someone else's.
But anyway, it’s an action to have an unpleasant feeling about. Do you have a story about what wrong is analyzed as?
Gus: Yeah, I would analyze it as: this action will not maximize the balance of pleasure over pain in the rest of the lifetime of the universe. So it is an incredibly difficult thing to know whether an action is wrong or right; I see it as an enormous empirical investigation. So when you say "murder is wrong", this means that it will cause much more pain than pleasure. And this is an example of how I believe that all moral concepts can be built up from the basics of pain's badness and pleasure's goodness.
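One way to write down the analysis Gus is sketching (my notation, not his):

$$\mathrm{Wrong}(a) \iff \sum_{t > t_{\mathrm{now}}} \big[\mathrm{pleasure}(a, t) - \mathrm{pain}(a, t)\big] < \max_{a'} \sum_{t > t_{\mathrm{now}}} \big[\mathrm{pleasure}(a', t) - \mathrm{pain}(a', t)\big]$$

that is, an action is wrong just in case some available alternative would produce a strictly greater balance of pleasure over pain, summed over all experiencers for the rest of the lifetime of the universe. The formula makes Gus's epistemic point vivid: evaluating the right-hand side is an enormous empirical problem, not a conceptual one.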
Neil: Okay. Yes. Yeah. I agree with you on what moral terms refer to in the end. They refer, in the end, to pleasure and displeasure, and to arrangements of pleasure and displeasure in reality. The action-related ones refer to arrangements of pleasure and displeasure in relation to an action, as when your action caused lots of pain.
That's when your action is wrong. So I think we are in agreement, as far as I can tell, on the reference of these terms. But the question I wanted to ask you was at the level in between the reference: the sense or the concept or the meaning. For a lot of history, people understood at some level what water meant, but they didn't know it was H2O.
This is very useful as an example for a naturalistic moral realist like myself, because if you understand what's going on that way, you can understand why alchemists couldn't just consult the dictionary to get out of their errors. They had to do some actual experiments and discover something to do this properly.
So if we make it something that is analyzable at the level of sense or meaning or concepts, where you can figure out the truth, there arises this question of why we didn't figure out the moral truth earlier. Why are so many people mistaken, if you can just analyze the concept and figure out the normative ethical truths? And this was the problem that G.E. Moore posed to all the utilitarians before him, Bentham and Mill and so on.
And we’ve been doing 120 years of metaethics since then trying to deal with this. My way of dealing with it is that what the concept wrong is to be analyzed as is action, to be displeased about. And then even the person who rejects hedonic utilitarianism, isn’t making a conceptual mistake. They aren’t doing something where they could just look it up in the dictionary or analyze their concept better and find the truth.
They’re doing something well, they’re in a situation like the alchemist. There’s something big that has to happen. And it’s not just going to be reflecting on your concepts. It’s not just going to be consulting the dictionary to find the answer. Now there’s people like Frank Jackson who say really analyzing your concepts is very hard.
So it could be that we have to do some really hard conceptual analysis, and maybe there's a way that some fan of Jackson can flip my view over cleverly into that. But I don't see it yet. As I see it, we aren't going to get there on conceptual analysis alone. And so you can't put hedonic utilitarianism into the content of the concept.
If you do, you're going to give an implausible account of the content of the concepts. The question is going to be: why didn't people figure this out before, if it's all there like "a sister is a female sibling", or "a square is a four-sided figure where all the sides are equal in length"? Why couldn't people figure it out?
And this is the problem that pushed utilitarianism into abandonment for the first half of the 20th century, as far as I can tell. It was still around; people still liked the view, but they went non-cognitivist or did something funny that often went in a different direction.
And then in the second half, people came up with other solutions to these problems and just ran off to other views. And now utilitarianism is basically nowhere, except for a bunch of effective altruists bringing it back, and that's the practical side, where it is. I'm trying to give you a theory that will solve Moore's problems and get us all the way back.
And the way to do that is to understand the wrongness judgment not as something that entails the theory on the conceptual side, but as just: okay, there's an action to feel displeased about. And that's compatible with "you should be displeased about lies because they are lies and nothing more".
And a non-naturalist can still hold that on my view; that's still a conceptual possibility. "You know why you should be displeased about lies? Because there's a non-natural property of wrongness attached to the lie." That's a conceptually possible view, in the way that the alchemist's view is: there's no contradiction in it.
The alchemist who thinks there's just earth, water, fire, and air, and that this stuff is a simple substance, that alchemist is not falling into contradiction. It's just an empirical mistake. And that's how I want to see all the other normative ethicists: they're just wrong, but not contradictory.
They're just making that kind of mistake. What you've got to do is figure out what to do with that "this is the thing to be displeased about" judgment. And that requires a little bit of empirical information now.
Gus: This is extremely interesting, and I don't have a firm judgment about who's right here. If it's so simple that it's just relating two concepts, why didn't Plato figure this out 2000 years ago? Yeah. I believe that we are very easily confused by the associations we make between pleasurable experiences and the value we can project onto objects or institutions or people.
So for example, if I have a religious experience, I might project some intrinsic goodness onto the religious figures that I'm worshipping, whereas it is actually my pleasure that's good.
So I see this as a series of very-easy-to-make mistakes about projecting value out into the world, as opposed to finding it in your experience. But maybe this talk about Moore's Open Question Argument and analytic versus synthetic naturalism is getting very nerdy and very philosophical, which is great.
Neil: Yeah. Let me say, the way you're doing it is the way Sharon does it. And I think there's some possibility that somebody could show me that really my way is a path to your way. And if that can be shown, there's a way to do it where you say: really, in what I've told you, there's a way to build it all the way out, where all the arguments I've given for this being hedonism synthetically,
a lot of that stuff, is actually analytic rather than synthetic; it was just so weird that nobody got all the way down to it. Maybe you could do that. And that's the way to flip me over into being an analytic naturalist like you and Sharon Hewitt Rawlette.
Reductionism and physicalism
Gus: We should talk about reductionism, because what we both want to do is to reduce ethical value to something that's discovered empirically.
And yeah, I have this, call it mistaken youthful ambition, that morality could be a science. And I think you're agreeing. And so, do you agree that if we are to fully naturalize ethics, we must be physicalists about our experiences? So we must accept that, in the end, conscious experiences are physical states or brain states.
Neil: I’m neutral right now on the physicalism non-physicalism question. One of the reasons why is, it’s just really hard for me to understand exactly what physical means. If you define it in terms of contemporary physics, I’m pretty doubtful that the stuff of contemporary physics gives you a full reductive treatment of consciousness.
If somebody has a great reduction, I know that there’s a integrated information theorists and other types of people who are offering reductions of various kinds, but from what I’ve seen, I would need to know a lot more to weigh into those, but the smart people who have tried, who have read the stuff have left me feeling pessimistic about whether I’d find it there.
So my guess right now is with current science all the way up and down, we just don’t have the stuff to build the reduction. There is an explanatory gap still remaining. Now I’m not a pessimist about closing that explanatory gap at some point. And David Papineau, a philosopher who on one of the PhilPapers surveys, I just matched totally with David Papineau, who’s a physicalist about the mind, he has an argument that really physicalism does really well. “You should expect some kind of physicalism to win in the end”. And I’m like, okay, David, I can go that way.
And I’ll just go provisionally with you that way. What I am confident about that is very close to physicalism is that consciousness is within space time. This has been proven. Bertrand Russell proved this in the 1920s.
It’s a consequence of special relativity, basically, according to special relativity, if something is in time, it has to be in space because of the unified nature of space-time. And qualia are in time. Consciousness unfolds in time. You have a conscious experience at one time and then it goes away. And then there’s another one.
Now I have an argument; I'm preparing this. Writing it up is a bit hard, but I've managed to make Russell's conclusion from special relativity a bit more precise. And I think I can figure out where consciousness is. It has a spatial location; well, mine is right here.
And yours is in your head. Because if you do the Einstein train things, if you've seen these with special relativity, the way he illustrates them, there's an argument I'm developing where you basically do the Einstein train thing and have the train run between two people having qualia.
From the timing situation, you can see how it works, and you would get completely bonkers results, results that special relativity says you won't get, if consciousness is anywhere else but in the head. You would get, in some frames of reference, if you're moving fast enough towards somebody, that their conscious experience could happen before their brain state happens, if it's at a different location. It's just a bizarre thing that special relativity says is not going to happen. And maybe you can make it work on Leibniz's Occasionalism or something really bizarre like that.
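The standard special-relativity fact the train argument runs on can be written out; this reconstruction is mine, not a formula from Neil's paper. Under a Lorentz boost at velocity $v$,

$$t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

so two events that are simultaneous in one frame but separated by a distance $\Delta x$ are separated in time by $\Delta t' = -\gamma v \Delta x / c^2$ in the boosted frame. If a brain state and its conscious experience occurred at different locations ($\Delta x \neq 0$), there would be frames in which the experience happens before the brain state; if they are co-located ($\Delta x = 0$), every frame agrees on their order.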
Even then, it would just be strange. Consciousness is here, within spacetime. We should invoke it in our scientific theories without fear. It's just another thing for which we don't have a full reduction yet, but there are plenty of those, because reducing things is hard, so let's just accept that it's here. Maybe, worst case scenario, we have to invoke some new fundamental forces, one or two of them, to deal with it. But if the thing is in spacetime and is in a convenient location near what it causes, it's not going to cause the disruptions to science that people worry about.
So it doesn't disrupt the causal order. It's just pure data at some level; some data showed up that was unusually ontologically robust. But you don't use Ockham's razor on data. That's monstrous. Don't cut up the data, don't simplify the data. Oh, you know what you could do then? The best thing would be to cut away all the data. Then you can be so simple in your ontology: you'll have nothing. But of course we don't do that. We don't accept "there is nothing". That's what you would get if you Ockham's-razored the data.
Consciousness is at some level appearing in some of our theories as data. It's not the only data, but it is a thing that appears in some theories as data. In my own psychology, it's basically: I had this experience; why did this happen? That's a question I could ask. It's showing up as data there.
Don't Ockham's-razor it; keep it in. Accept that it's there, and now build it into your theory of the world. Whether it's reducible to what we have on the table right now or not, we don't know, but it's in spacetime, and it's causally structured like things that play nicely. Just accept that consciousness is there, that qualia are there.
They're in spacetime; they're wired up to everything else nicely. They don't necessarily cause anything, so don't worry about them messing up other sciences. The other sciences can proceed just fine. You just have all this other stuff that happens to be there. And really, it was only the behaviorists who started raising a giant hue and cry about this stuff.
And I’ve actually looked into the history of this, the behaviorists, as far as I can tell Karl Lashley is doing the devil’s work here. He is actually harming science in a really terrible way. He says in the early 1920s, there’s a debate between the psychologists. There’s this guy Fernberger, this other psychologist who says, “you know what we need to do, we need to split up psychology” .
We’ll have you behaviorists you get your thing, you get your own science of causing behavior. And we’re also going to have the science of consciousness where we figure out, like, why are conscious experiences happening? Let’s figure out what their physical structure is. And it’s okay if the the’re epiphenomenalist for that science.
And that’s what keeps peace between the two sciences. You behaviorists you figure out what causes behavior and maybe all this stuff is phenomenal and you never have to deal with it again, but we’ll have some consciousness people on the side dealing with that.
Lashley comes in and he says: we are not even allowing a science of consciousness. Nobody has the right to collect data on this. And here's the deep reason why: we behaviorists want to be physicalists, and there could be data there that disrupts physicalism, so don't collect it. And this is just terrible, because the kind of physicalism I like is the valiant, heroic physicalism that finds all the data and explains it, and takes the risk that maybe we can't explain it with the stuff we have and maybe we need more stuff. Go out there and be a heroic physicalist and try your best to explain the difficult things.
And maybe it turns out some of them are illusions. Okay, that'll come up as we look at the data and find out it's badly collected or something. Go and find stuff and explain it; don't run away and then say we're not going to collect this. You just lose a science of consciousness that you could have had, that Fernberger wanted to have.
And I think Fernberger was right in this debate. There's a science that's missing here, a science of consciousness. And we need to go back to people like Russell and Einstein, who would tell us the qualia have to be in spacetime. And then we can rebuild the science of consciousness, which has been just this missing patch in our set of sciences for a century now, because of Lashley's scientific crime.
Gus: I think that we have to reduce rather than eliminate consciousness if we are to move forward. And I think there could be a lot of interesting discoveries about consciousness. Because it's so close to what we're doing, and because it's so valuable to us, we are missing a lot if we're not taking consciousness seriously as an object to be investigated.
Neil: Absolutely. And I think hedonic utilitarianism in particular has suffered from this, because the value stuff of hedonic utilitarianism becomes ontologically questionable: it's within consciousness, and we don't know what that stuff is.
Nobody's really sure how to deal with it. The sciences can't touch it. This stuff exists in an ontological gray zone where respectable people aren't willing to engage with it, and only unrespectable people are allowed to engage with it: people who are scientific renegades, and philosophers, who don't always have to obey scientific rules.
So yeah, that's just what's over there, and you just get bad theories then. But if Fernberger had won, I think we would have some great stuff going on right now. Consciousness wouldn't be an area full of philosophers and people who don't really like science that much. And there are scientists; people do research on the neural correlates of consciousness, but that is so much smaller than I think it should be, and than it would be if Fernberger, rather than Lashley, had won the debate a hundred years ago. But I don't really know why Lashley won. It's a mystery to me. Fernberger totally seems to be right.
And there were people on his side who were pushing for that, but behaviorism won, and the damage Lashley did with Ockham's razor lasts to the present day.
From philosophy to science
Gus: What I hear glimpses of in what you're saying is this view of philosophy as, you could call it, a playground before something graduates to become a science.
And so we're figuring out the basics of a field, figuring out what we're even investigating. And when something has been investigated in this way, then it can move on and we can make a science out of it. Is that how you see things?
Neil: Very much so, very much. Your playground metaphor is one I'll have to consider. The metaphor I've been thinking of is the mother of the sciences, because that's where all this came from. Look at what Newton calls his book: Mathematical Principles of Natural Philosophy. Look at what John Dalton calls the book where he comes up with atomic theory; I just love this title: A New System of Chemical Philosophy.
I imagine if it were still called the chemical philosophy department. My dad got his PhD in chemistry; he was a synthetic organic chemist. Chemical philosophy. Can you imagine how many grants we philosophers would get if they were still calling it chemical philosophy? They'd be giving us grants and saying, "make a new chemistry for us, please, can you make another one of those? Because that turned out really well".
From what I understand, this is something that went wrong in the 1830s. The 1830s, from what I've heard, is the time when science gets something like its modern meaning, which excludes things like philosophy
and includes something like what we think of as the sciences of today. Before that, to be a scientist, you'd be a natural philosopher. And this was one of the areas within which sciences were growing, the playground as you have it. Some were playing around there, and I guess the ones that made it off the playground became sciences, and then no one remembers that they were back on the playground before. But you see it in the titles of the books: A New System of Chemical Philosophy, Mathematical Principles of Natural Philosophy.
And there you will find the foundations of two of our most successful scientific disciplines; probably the two most significant books in them are the one with atomic theory and the one Newton wrote. So really, we philosophers need to reclaim that, realize that this is what we do, that it contributes in a giant way to the progress of human thought, and then be like, yeah, we're doing Newton and Dalton's stuff.
Give us some grants.
Morality as a science
Gus: There is this project that I like, of trying to make morality into a science. And this seems to have fallen into disregard in philosophy. It seems to be regarded as naive or simplistic in a way, or maybe as a hope that can never be realized, because we have discovered something about morality that makes it so it can never be a science.
I'll try to explain what I mean here. When I say morality as a science, I mean that we're trying to use all the disciplines we know of. We're trying to use physics, biology, the brain sciences, the social sciences, economics, everything we know, to inform this giant project of maximizing the good in the world.
I would like a science of morality to be a unifying justification for what we're doing in all of the sciences, for what projects we're researching. It's a prioritization scheme for what we should investigate. Do you feel that there are great arguments against this, or is it more of a kind of zeitgeist that is against morality as a science currently?
Neil: Yeah. I’ll go with the temporary zeitgeist account of why people are opposed to this. And I think there is one kind of good reason why people are opposed to this, which is that a lot of people have done it badly. Attempts to turn morality into a science generally are a lot of them just, yeah, we’re not good. But yeah, there’ve been just a lot of things that didn’t work.
Really a lot of them did not take metaethics seriously. And I think that solving certain metaethical puzzles is really what you have to do. You don’t have to solve all the puzzles because some of the puzzles you have to show are misguided in some ways and anything that requires a categorical reasons for action or universal reasons for action. I want to say this isn’t a puzzle, we should even go in. Rather, what we’re looking for here is accuracy conditions for feeling or how we should feel, how to feel. That’s the correct way to understand this. I’ll try to do that at the conceptual level first. And then once we’ve gotten into that, I’m like, okay, I can give you a broadly scientific story about how these work once we’re looking for those.
So I think there are two ideas you had there that I agree with: one of them is using all the sciences to figure out morality, and the other is using morality, once we've figured it out, to prioritize what we do in the sciences.
So on the first one, using all the sciences to figure out morality: that's really what my treatment of philosophy as the mother of the sciences amounts to. It's basically philosophy saying, "okay, kids, all of you help the new baby". So of course this was going to happen, and as you figure out more things, you have more little helpers to help out the new babies.
I wish motherhood took that shape more often; as I understand it, kids more often just try to cause trouble for the new baby. Set that aside. We have very nice kids, very good sciences, and we can trust them to give us good results.
Psychology, we’ve had some problems with the replication crisis. That’s true, but Einstein, I trust Einstein to tell me what’s the shape of the world. The shape of the universe is. He figured that out. And really we have managed to verify what he told us in some pretty amazing ways. Cell phone GPS off of the theory of relativity, because, the way that works requires his, his theory.
We just have awesome theories, and awesome results coming out of these theories. Let's trust them. Let's trust those theories to tell us about this universe: we're in an Einsteinian spacetime. That's what this is. And one of the things you get when you get that is, I think, metaethically important.
There’s some powerful anti-intuition stuff that Einstein got from Hume and that should inspire us in how we do metaethics today because Einstein himself, I think in a letter to Moritz Schlick of the Vienna Circle, I think it’s just Schlick some years after, I think, 10 years after, was it, after discovering relativity in 1905 Einstein writes...
It was empiricism about the way concepts are structured. It’s the way we get our concepts is from experience. So we get our concept of time from our experience of time, get our concept of space from our experience of space. And if that’s what it is then the concept of time could work in something like the way Einstein says it does where we get violations of the way that Kant says the things could work.
You could get such things as time travel; it's a conceptual possibility in certain systems. Whereas in the Kantian system, time proceeds in one direction, space has to be three-dimensional, and space has a Euclidean geometry; that structure is supposed to be within, at least, the pure intuition, I don't know if it's conceptual, but you get the pure intuition. So it can be known a priori that space is Euclidean, for example. That can be known within the Kantian system. Within Hume's system, it's just hard to show how you could know that a priori. And Einstein is reading Hume, figuring out: okay, space being Euclidean is not a priori in this system.
There are going to be ways in which space isn't Euclidean. So that is something that Einstein, as far as we can tell, finds in Hume: concepts of space and time that are not the Kantian ones. What this shows is that you can't be very confident in Kantian intuitive structures to tell you what important things like space and time, and right and wrong, are. And Kant is going to tell you that morality is fundamentally related to action.
We’re looking for a universal laws and the way the categorical imperative works, it’s supposed to be running on universal maxims that have a fundamentally action-directed structure. Could you will these things together? It’s very important to the structure of Kants theory that this is about action and it’s very important to the contemporary Kantians to Christine Korsgaard that this is coming up in reflection about what to do. Practical reason is where morality’s natural home is.
What I want to do here is a very Einstein-like move, a very Hume-driven move. What I'm looking at is the structure of our moral concepts. How are they actually built? What Einstein had to go on here was some empirical data that drove him, measurements of the speed of light, and those pushed him to seeing time and space in a very different way than Kant did.
The gay conversion therapy example could be my empirical data. It shows us that the practical output is just not showing up. So let's move to a more representational, perceptual way of looking at our feelings, once we look at the concepts that way and move away from the way Kant wanted us to treat morality, fundamentally in terms of action.
Once we look at this in terms of feeling, once we look at it in the copy-principle empiricism way that Hume understood our concepts, once we build up that way: I'm just doing Humean copy-principle empiricism on feelings. A feeling comes in, and the concept you build has some essential connection to that feeling that came in.
It's that feeling, and you add a should onto it as well. There's something normative there; I want to account for that. Okay: feeling and should, that's how you get a moral concept. And you can strip the feeling down even further, to the pleasure and displeasure that's in it.
So I'm building up the concept Hume's way. That's how Einstein discovered the shape of the universe. Let's do it again and see what we discover. And maybe there's Einstein-sized stuff at the end of this. So that's what I want to do. So now here we are, using the sciences, just like you're saying, Gus, to figure out morality. And then once we figure out, okay, this is what the good is,
then we can investigate the sciences that are the most helpful in pursuing the good. Maybe we can build something like: we have the good, now let's do political science with the good in hand. We know pleasure is the good, okay. Let's assume this confidently in political science and do the all-out utilitarian political science that Bentham would have dreamed of.
It’s there for us now.
Human motivation
Gus: Great. In general, you’re very inspired by Hume in your account of human motivation. So could you briefly explain the Humean theory of motivation for us?
Neil: Yeah. It’s something that I was going into a little bit earlier with the way that our moral judgment works, right?
So the idea of the Humean theory of motivation is: desire drives everything we do in terms of choosing the goals of action. So it motivates all of our actions. Whenever you act, you have a desire for some outcome, some end, and you have a belief that by taking the action you can produce the outcome or the end. And often taking the action has a bunch of steps between the action and the end,
and you have some belief about what those steps are. By pouring this water into this glass, I can quench my thirst; there's a couple of other steps involved in doing things like this, but the end is quenching my thirst, drinking the water, something like that. And I do some things as means to that end.
What the Humean theory is trying to rule out is something where I have a belief about what's good or right, and that belief plays the kind of role I assigned to desire in that explanation. The belief about what's good or right says this outcome is good, or this action is right, and that either drives actions that I believe will produce that outcome, or makes me do that action, if the action itself is right.
So Hume is arguing against views where belief, or as he put it, "reason", can motivate us. And the way I defend the Humean theory, I don't just think it's about "okay, we can't have reason immediately driving action". I think you also need to add to this, and this is where I disagree with Michael Smith, who calls himself a Humean,
that belief cannot generate a desire. Through reasoning from beliefs alone, you can't have a bunch of beliefs, reason from your beliefs, and end up with a desire. Now, you can imagine creatures that can do that. I don't think that the psychology Michael Smith suggests, where beliefs about our reasons to do something, or moral beliefs, generate desires, is impossible as a psychology. You can imagine that there could be really psychologically powerful creatures like that. They would think, ah, I ought to work harder, I have a reason to work harder, and they'd form a desire to work harder, and they would get more work done than me.
But really, we aren't like that. And the gay conversion therapy case is really what I think shows it.
Gus: Imagine I go to a philosophy seminar with Peter Singer and he convinces me through reason that factory farming is morally abhorrent. Isn't that a case in which, if I then stop eating meat, I am directly motivated by my newly formed beliefs as opposed to my desires?
Neil: We have to get into your head as you've listened to Peter Singer and figure out what exactly happened there. Really, this needs to be treated with a great deal of psychological depth. We need to really explore how this is going on, how moral persuasion happens.
Now, the cases of moral persuasion that I was seeing anti-Humeans offer in the philosophical literature: when I looked at these, they were leaving details in to make them true to life, and when I looked closely at those details, it's like, why is that detail there, if it's all belief? So I'll give you an example. There is a case that Stephen Darwall has where this woman watches a film about workers being treated badly in a cotton mill or something in the Southern United States.
And then that experience gets her to become an activist working for better conditions for the workers. Okay. If you take that process, she sees the film, becomes an activist, and does that, it seems like your Peter Singer case. She gets information and acts on the basis of information. Sounds pretty reason-based.
But one of the details that Darwall leaves in is that this woman, Roberta, feels an experience of shock and horror as she sees how the workers are being treated. And now there's this question: why does she have that emotional response? Why are shock and horror showing up at a time when, we would think, she hasn't yet formed the decision to go there?
Now, you can put the decision oddly early and say, oh, she had really decided early. But I think a natural way to see the case, and the way Darwall presents it, is that people are just shocked and horrified by this, and they feel that first.
And then they decide what to do. And even if they're shocked and horrified and then decide something must be done, you can be shocked even before you decide something must be done, even before you draw any practical inferences relating to action. You can just watch it. It can be almost the way you'd watch a fiction, like a film where something really bad happens to somebody, and you feel bad for that person, but there's nothing you can do, because obviously it's a fiction.
You can watch the documentary that way, feel that, and then later on think: can I do anything for those people? Oh, there might be something. That's the way Darwall presents this, and now we're trying to explain those feelings. A thing about belief is that belief on its own does not generate horror.
As far as I can tell, to be horrified, you first need something like a desire for the thing not to happen. So when Peter Singer tells you about what's bad in factory farms, you have an unpleasant experience as you think about what's happening to the animals, which I think is how it usually is for people.
And that’s why our EA veggie vegan friends, try very often to give you cuddly animal pictures especially when they’re talking to ordinary people who are not philosophers. The way it ordinarily works is you give people that, now maybe if people are primed up in a very complex way that philosophers often get, or if they’re the kinds of unusual people who become philosophers or who become EAs.
I know that within our EA community, we have some people who are just a little bit different from the rest of the folks, and maybe they have something special going on where you have to talk to them in a special way. I don’t know. But with ordinary people, you give them cuddly animal pictures.
To get them to donate to global poverty causes, you give them the kids in Africa and India, and you make them cute. As far as I can tell, this is not a belief driven process. This is a, looking a lot more like processes that manipulate and enlist the help of desires. So you had some desires first. And what usually goes on in this persuasion is they’re reaching into your desires and grabbing something.
Now, I don’t think this has to necessarily undermine the rationality of the persuasion. It may be that the images are actually undoing something irrational, you were stuck in beforehand. It was just that you were, what was much more vivid to you was the luxury goods you could buy and these images of the animals or the kids, raised vividness of the other things so that now you can make a decision from equal vividness, which is better.
So this is not to say that something irrational is going on there. It psychologically might be engaging with things that are on the motivation /desire side in a way that’s much more Humean than a way where I get convinced first and the beliefs drive my action from then on.
There’s a guy named Josh May, who I think is the best anti-Humean out there. He has a book called Regard for Reason in the Moral Mind. I am not happy with a lot of anti-Humeans who just do a slipshod attempt at best to engage with the empirical work, but he really tries. And one thing that I think we’ve come to we’ve come to a sort of agreement on, or quasi-agreement, as far as I can tell, obviously he has his position, I have mine. But as far as I see it I want to go along with him on the idea that rational persuasion is much more frequent than say Jonathan Haidt or Josh Green or people like that were suggesting. Yeah, we can be rationally persuaded and it happens, of moral things. This really does happen.
However, that happens. I want to say with an underlying strongly Humean psychology where desire drives all action. And when we act morally, that’s going to be at some deep level desire-driven too, but that’s okay. That doesn’t undermine the rationality of things in general, just understand rationality, more the way Hume would. We’re going to be good, rational Humeans about this and form our judgments the way Einstein did. Simplest explanation of the data.
That’s the way you do it. Take in the experience, build the simplest explanation. Einstein says that’s the supreme goal of all theory. And that is the rational process. That scientific rational process is the one that leads us to the moral truth.
Alienation from our values
Gus: There’s this problem of feeling alienated from our own values. There is a problem of this feeling of alienation from our own values, especially among utilitarians or effective altruists. Especially if you have this system that places high demands on you. Our values can feel distant. We can feel like we’re being almost oppressed by what we have to do, what we should do. And do you have any advice for dealing with this?
Neil: It may be too late for many who feel that conflict, because that problem, I do think, just is something that happens, that comes up from within your desires at a certain level.
Now, suppose you were an utterly pure-of-heart utilitarian: you believe the theory, and all your desires and emotions are in line with the theory. I don't think we have many people like that. Psychologically, it's hard for a human being to be that way. I think we can make ourselves slowly more and more that way.
And it's really good for some people to do that. Maybe not everybody; who knows if everybody should. But I want some ninjas. I want some utterly single-minded, pure utilitarians who will just go around the world and end all the existential risks. We have some of those people, and they will save us from monsters.
So yeah, you can try to build yourself up that way. It's hard, but people can try to do it over time. People who come to the theory young have an easier time of this, because they can set themselves up psychologically over time and take courses in life where disruptions don't happen.
I’ve been lucky that way. Coming into philosophy and as a utilitarian, this is what I did. I was an undergraduate. I was 18 years old, a freshman at Harvard. And I was, it occurred to me that this phenomenal introspection way of figuring out that pleasure was good. That was, yeah. That has to be the way you do it.
And it was just like the same way that a young chess player will be. Like, “I have checkmate here. I know there’s mate in maybe seven or something like that, but I have it”. And you just throw all the pieces at it and everyone thinks that’s crazy. Why are you throwing all your pieces away? Now I’m 41 and I’ve thrown a lot of pieces at this.
And I’m like, now it’s not mate in seven. It’s mate in three. And I think I’ve got this. It’s just coming closer and closer, that’s how this goes. And then if you do it that way, you have time to build up your life in such a way that you don’t get all the contrary things that would pull you away from the theory.
If you have children, you’re going to have this kind of conflict between caring for one or a few, your children. And then this theory that you accept that tells you to do something else. And that conflict is just really hard to resolve in your own life because you love your children.
Utilitarians who have kids, they really do have to figure out how they’re going to do this. And I wish them the best. My, my situation is easier. I don’t have kids. I can be pretty single-minded, pretty focused. And I’ve had plenty of time to try to build myself up as a person in a way that made me even more single-minded and focused, and yeah, you can do that. That’s really at bottom, if your motivations are separate, you will be alienated. You’ll have a lot of motivations and you’ll think about one motivation of yours that goes in a different direction. You’ll have conflict. And that conflict, I think just is a kind of alienation, or an alienation emerges from it that I can’t tell you how to get away from, if you feel it.
Christine Korsgaard, in The Sources of Normativity, has this idea that rational deliberation could unify you. If your desires are not unified, there's a way that you could rationally say, I have settled on this, and then everything falls back together. And I don't think that actually works.
I don't think that human beings can do that. They can do it in some situations, where they find a solution and bring the things together in some real resolution between them. They find a way to pursue everything just about enough, and they're happy. But there are times when there's nothing rational deliberation can do, and you're going to lose a part of yourself, and it's always going to be screaming at you for not following it: if you have children or something, and it's a choice between your children and the utilitarian path, and you really believe utilitarianism is the right moral theory.
The more severe that becomes, the worse it could be, and I don't think there's necessarily a way out for people if that really becomes a dilemma.
Children as a utilitarian
Gus: This is a tricky issue, the issue of having children as a utilitarian. I would worry about what I consider some of the most ethical people on earth choosing not to have children, and then maybe not furthering these values. I am skeptical about the case against having children, I must say.
Neil: We should talk about that, because I have the "it's just fine for utilitarians to not procreate" kind of view. I think our most effective way to make people better is not through the incredibly expensive and difficult process of raising them ourselves. The grow-your-own approach just doesn't scale, and we need things that scale. The better way to do it is the way that Peter Singer actually made a lot of utilitarians.
He didn't do it by impregnating somebody who gave birth to them. He wrote a bunch of books, and I'm doing that. I think we can do it that way successfully. We just go out and make good arguments. And really, if the problem was that there were a lot of great arguments out there, the theory had been decisively argued for, and we just needed to breed some people who would believe it, then it would be a different situation.
But a lot of the things we have to do, we just have to do those things first, and then maybe once those are done, there'll be time for this. We just have so many things to do first. And I really think that the grow-your-own approach is inefficient. It doesn't scale. It's not going to be a good way to throw our resources at things.
A better way, I think, would be to go to the communities that we’re helping with our global poverty aid and just say, “hey, by the way, this is us and our theory that’s been distributing the deworming pills and the bed nets here. If you’re interested, we can build some education on top of this for you and pipe you into our system as future utilitarians who’ve been helped.”
I really look forward to those people. I want to see the future utilitarians who came up from “yeah, I’m here because of the bed nets. Hello”.
Gus: It’s an absolutely beautiful thought to think of a future effective altruist that was once saved by the efforts of effective altruism.
Neil: They are coming. I promise you, they are on the way. I’m in that quasi-position myself, because goodness knows if I’d be here if not for a piece of American foreign aid. My parents come from a tiny village in rural India, and there was some grain assistance at a certain point, I think during the late sixties or early seventies, when there were various crises in that part of India.
Some of this is Peter Singer’s stuff, because we’re from West Bengal, which is close to Bangladesh. I speak the same language as the Bangladeshis that Peter Singer was telling everyone to help. So you’ve got the same blood here, and maybe someone’s aid helped, though it wasn’t specifically utilitarians.
But my dad was telling me that sacks of grain at one point came in with the American flag on them. And he learned to love America partly from “hey, these are the people who feed you when you’re hungry”. That’s great. That’s an awesome way to buy goodwill internationally. For a sack of grain, you get my dad on your side, and he comes to the US and becomes a pharmaceutical chemist. Wonderful.
EAs can do this. And I think that’s going to be so sweet when people like my dad start showing up.
Gus: With regards to the point about children, I agree that it’s more effective to convince people than to create people. But I would worry about the marketing of effective altruism or utilitarianism if it became widely known that to join this community you must abstain from having children. I think this eliminates a broad chunk of the population. Given the choice between accepting these values and having children, the values will be rejected, because if anything is an innate drive, it’s the desire to have children.
Neil: Right. Yeah, I see that. And once the movement gets big enough, obviously this couldn’t be a requirement. I would never want it to be a requirement on people in the movement or something like that. It just loses people pointlessly. What I do think is, when I talked about ninjas, not everybody who’s doing the good has to be a ninja, but some people think it would be awesome to become a ninja.
The whole idea is that you need special people to do these jobs. And right now there are some select jobs utilitarians need done to prevent existential risk. You need to deal with biotechnology policy so that we don’t get even worse pandemics than we have now, ones that actually could kill the entire human race. Which you could get if someone gets their own “design your own virus” kit and decides to design the nastiest virus they ever could: let’s take COVID and build some more nasty stuff into the RNA, so it’ll totally smash anybody it goes into. So make sure that nobody does that.
I have met some people who are trying to do this: get into the right places in governments to shut down the “design your own humanity-destroying virus” kit, to make sure it never happens. And there are people working on that project. So those are the people.
Once you build up your hero teams, there’s going to be a supporting infrastructure, in any proper society that can support this, of ordinary people going about their ordinary lives, occasionally meeting one of your heroes and being like, “oh, thanks, hero, for helping out”. And the hero is like, “hey, glad to help” and rushes off, spending his or her or their entire life doing this kind of task.
That’s in the end where we want to go, but we just need some people right now. There’ll be room for all types once we build this up. I don’t think in the end this becomes “nobody has children”. It’s just that while we have a small number of people to work with, we do it this way.
And if you want to engage at a different level, there are some people, I think Jeff Kaufman and Julia Wise, who are EAs doing a good job of showing you how to live the EA life from the parent perspective, to show that it can be done. I think they have their own sort of giant useful mission.
Motivation and community
Gus: The question of integrating our moral philosophy into our lives is maybe the most important, because otherwise what we’re doing when we’re discussing metaethics is just, you could call it, a form of live action role playing, if we’re not actually implementing it in our actions.
So how do you think about the importance of community here? This seems to me to be one of the really important motivators for people: to be part of a community with some values in common.
Neil: Right, so I don’t have a general answer to the question of community, because how to develop that community obviously depends on what your initial situation is and where you need to go from there. In some places the EA community functions as a community. When I go to the Bay Area, or when I see some of my friends in the Oxford area, it’s almost like a small religious community in some ways.
And those do succeed sometimes; they just have to figure out how to do it and do their thing. So there’s that, and you can build that. And we’re going to need communities of other kinds too. We’re going to need EAs integrated into larger communities of non-EAs, because very often that’s how it has to be.
So we’ll get all kinds of solutions to these problems as people think about their specific situations. But on the community and “is metaethics just LARPing” kinds of issues: look, the way that a lot of people do metaethics, it might just be. I can’t answer for the field in general. But the way I came to it, it came up from something that was very embedded in the world.
The reason I got so worried about these questions, about whether we can scientifically prove that something is right or wrong, good or evil, was that I came up against one somewhat genocide-ish guy in my upbringing: Senator Jesse Helms of North Carolina, who was the senator of the state I lived in from age eight all the way to the end of college, when I’d come back from college to where my parents lived in North Carolina. Here’s the genocide-like thing that he accepted. He was pretty sanguine about the AIDS crisis, because it killed gay people. That was something Jesse Helms really did not mind; he saw the end of them as something to be wanted.
And that view, if it’s not full-on genocide, is in that direction. There’s an entire population that he just wants to see die. I don’t think he’d go as far as actually acting for it, but “hey, if God’s taking care of it, why interfere with the hand of the Lord” was the attitude I was seeing from him.
And yeah, this stuff is out there. He was pushing against funding to deal with the disease for a very long time, throughout the nineties even. At some point he flipped a little bit and would allow it, but that was long into the epidemic. People had died in large numbers while he was blocking funding.
We had that. He was also just racially terrible. He was a segregationist from the Martin Luther King days and had disliked Martin Luther King’s attempts to establish integration in the South. Racial hatred was deep in this guy. And I saw him twice defeat a black man who had been the mayor of the biggest city in the state, a highly respected former architect, in Senate races in 1990 and 1996.
I was 10 and 16 years old at those times. And that was my introduction to the fact that, yeah, evil was afoot here, and people love it. People think it’s right. People think it’s the objective moral truth. And we see them now in the South again, with certain kinds of views that could lead America to cease being a democracy if they succeed at what they’re doing.
Gus: So this might be a good point to remind ourselves of the dangers of having a community with shared values when those values are wrong. A community can be deeply motivating for people whether or not its values are right. If your religious community teaches you about the dangers of homosexuality, then you might be extremely motivated to do something about it.
And this is of course something that effective altruists have to look out for too. Avoiding dogmatism, I would say, should be a main focus, along with remaining open to criticism.
Neil: I do think that dogmatists are more likely to be loud about their dogmatism than non-dogmatists are to be loud about their non-dogmatism. So when you look at discourse, you naturally will overestimate the amount of dogmatism. I’m happy with the community.
I think there’s enough going on, and the community is diverse and all over the place. There’s the Bay Area people, the Oxford people, a bunch of scattered people. We have chapters in Singapore and Hong Kong that I’ve seen, and there’s a bunch of different perspectives that come from their different local environments.
So I feel like the community is becoming a network of different opinions. That will prevent all that bad “we’re in this community and we have one view” stuff from going on.
Giving What We Can
Gus: You’re a member of Giving What We Can. What is it, how did you make the choice to become a member, and how has your experience of being a member been?
Neil: Yeah, it was about 10 years ago. I was already a utilitarian, and this philosopher named Rachel Brown in Australia, who I think is teaching at Australian National University now, told me, “there are these people, these effective altruists, who are like you, and they’ve started a thing, and maybe you should join them”.
And I looked at it and was like, that’s the kind of thing I’m going to be part of. So I took the pledge and got in, and it was just, “hey, I’m a utilitarian, and these people are being good utilitarians, just let me join them”. And, oh, you’re going to find me some charities to donate to. Okay, I have money now, because I had just gotten the job in Singapore in 2008.
And I had thought to myself, I want to keep giving away a quarter of my salary every year, and I’ve managed to pretty much stick to that, between charitable and US political causes. Basically I try to hit the pledge, over 10% on global poverty and animals, and the rest of the quarter is usually US politics. I wouldn’t say I’ve always hit 25%, but 20-25% is a pretty good estimate for me, usually.
Gus: And this hasn’t been disruptive in your life? Has it been easy for you to stick to this?
Neil: Really easy. I never really got on the hedonic treadmill. When I moved up from being a grad student to being an assistant professor, I only made small moves. That’s my advice to anybody who wants to be happy with just a little: do it that way, and it’ll work out.
And if you’ve got a successful career going, just live beneath your means at every stage. I’ve always lived about one career rank below my means, so here I am, an associate professor who’s bought some nice things and lives like an assistant professor.
I’m feeling all right. Or maybe I even live like a post-doc, two levels down. So yeah, just keep doing that and you can be happy. And not having kids, that’s huge too. I had thought about it; it’s not like I’m personally opposed to the idea.
But I realized that to do that, either I or a woman I was in a relationship with would have to make huge career sacrifices, probably both of us, and I just didn’t see how that was going to be something I could count on happening in a good way. So really the thought was: if somehow that falls into place, go for it, but don’t expect it.
Just go at philosophy, because you’re doing well with that, and the relationship stuff you can’t be confident about. Now things are going well. I have a little cat meowing here, because a nice lady has brought me a cat, and she’s here too. But we’ll see what happens anyway.
That’s how I got to where I am now. It’s worked out easy for me because I was well set for it to be easy ever since I was 18. I was on one track: let me do philosophy, prove utilitarianism, all that kind of stuff. I was just on my own mission.
I was running out there with my katana blade, hiding when I had to hide and slaying things when I had to slay things.
Political action for utilitarians
Gus: Let’s talk about how we should do politics if we are generally utilitarian or at least if we are effective altruists.
So one point to note before we dig into the system: EA, the effective altruist movement, has stayed relatively apolitical. It has tried to stay out of the murkiest and most controversial matters. And I think this has been a good thing, because we want this movement to be inclusive of people with different political views.
We want it to be serious in a way that’s not in the mud of the political issue of the day. So that’s one thing. The other thing is just the importance of politics for all of the things that effective altruists want to achieve. So how do we think about the value of staying somewhat apolitical?
Neil: So there is a lot of value in it that I see as strategic value because once you are political, you have enemies and being apolitical avoids enemies. So there’s definitely a lot to be gained by that. It does keep the movement open to lots of different people and that’s good because we just want more people doing the “donate money to global poverty preventing causes” type of thing.
And if we get people of all different views, if we get libertarians doing it because they’re like “this kind of charity is awesome”, and they have a special drive to show the power of private donation as opposed to government activity, let them have their fun in the best possible way.
I don’t think their ideas about how to set up society are any good. They can’t build a sewer system, because that requires government over the top in a way that I’ve never seen a libertarian theory deal with properly.
But hey, look, we’re not talking with them about that right now. We’re talking with them about global poverty. Okay, go ahead, send the money, and I’ll send mine too, and we’ll shake hands.
There are places where, as far as actual goals, EA does want some things to happen. And the number one thing that I think the effective altruist movement would like, that would just be good for EA goals, is something that can globally prevent dangerous technology from killing us all. The “what would kill us all” threats mostly work that way. Asteroids could knock us out, but for the most part the big ones are near-term: we develop some technology and it smashes us. AI is an example. Pandemics: humanity has been through a lot of those, but if someone really manages to up the viruses’ game, if viruses get stronger with distinctly intentional human help because somebody wants to kill everybody, that’s dangerous.
One thing I especially worry about, and this has not so much come up, is a version of AI risk: military AIs are the scariest AIs to me, because they’re not going to have the right safety measures attached to them, probably because they need to be built in secret.
It’s a secret project to build military technology; you don’t want anyone else copying that. So there might be too few eyes looking at this, and they might be unsafe. They might have much closer and easier access to weapons systems. And some North Korea-like country might decide, okay, our ticket to power is building a military AI.
And yeah, some really bad stuff could happen.
Gus: Yeah, we should mention briefly why we should expect these risks of extinction to be human-caused rather than something that arises from nature, like volcanic activity or asteroids. In general, the reason is that we can look at how long humanity has survived so far.
And then we can say that if we’ve survived, let’s say, 200,000 years, the probability of natural extinction per year must be pretty low. The stuff that’s about to change is the human-generated risks such as you mentioned: engineered pandemics, AI risk, maybe a great power war, a nuclear war. These things.
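To spell out the arithmetic behind this survival argument (a minimal sketch, assuming a constant annual natural-extinction probability $p$ and independence across years):

$$\Pr(\text{survive } T \text{ years}) = (1-p)^T \approx e^{-pT}$$

For this to be non-negligible with $T \approx 200{,}000$, we need roughly $p \lesssim 1/T = 5 \times 10^{-6}$ per year, so the natural background risk must be tiny. New human-generated risks have no such track record constraining them.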
This is what you suggest as the common political project for effective altruists, whatever their politics in the everyday political domain.
Neil: Yes. Yeah. That is the project that I think the movement has rightly seized on and people have their own assessments of which technology is the most dangerous one, but something along those lines is the big issue.
I would add anything connected to a military to the list, because nuclear weapons were the classic one, and those are still out there, still all over the place. I worry about those because if an AI managed to hack its way into them, or something like that, that just gives it an easy way to cause destruction. If you just have these things around, it’s a source of trouble.
World government
Neil: Yeah, there’s just a lot of technological danger. What do you do about technological danger? For mitigating technological risks that could kill everybody, you need everybody who is in control of a dangerous technology to be sensitive to the consequences of the risk. And the kind of actor who does kill everybody, I imagine, is a North Korea-type actor who really feels like: there is some risk of us dying too, but overall, in the grand strategic calculus, we are the people who will take a one-in-a-hundred risk that absolutely everybody dies to get an advantage.
One in a hundred, everybody dies; 49% chance the project goes nowhere; 50% chance we get a significant strategic advantage. I think North Korea goes for it then. And if you play that game a hundred times, the chances are pretty good for the end of the world.
You probably don’t end up playing that game a hundred times; probably around time number 45, no more play. If a lot of people are playing that game, it’s over. It could also be corporations trying to do something to maximize their profits that has a giant possible cost.
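The arithmetic of repeated gambles is worth making explicit (a minimal sketch, assuming independent plays, each with a 1% chance of catastrophe):

$$\Pr(\text{catastrophe within } n \text{ plays}) = 1 - (1 - 0.01)^n$$

For $n = 100$ this is $1 - 0.99^{100} \approx 63\%$, and even by play 45 the cumulative risk is already $1 - 0.99^{45} \approx 36\%$. On these assumptions, a world where actors keep taking “small” one-in-a-hundred gambles is more likely than not to end.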
But hey, look, if there’s that kind of riches on the line, you’ll take a one-in-a-thousand chance of dying. So I don’t want people taking risks on behalf of all of humanity that they personally profit from. And we have a way to stop collective action problems like that: this is just what regulation and government are there for.
When people are in collective action problems, you regulate your way out. This is how you build a sewer system. We’re in a collective action problem because we’re all creating waste that is infectious, and we all have to put our money together to build the pipes. We deal with that, and now cities are livable.
I actually found out that before about 1900, cities were net population losers, because people would die to all the disease and all the filth and all the waste there. Jared Diamond has this throwaway observation about that in Guns, Germs, and Steel. Then you build sanitation, and now cities don’t need to be replenished from the outside.
So that’s just an example. I tell my PPE students this: solve collective action problems with government. Now we have a collective action problem in keeping humanity alive. So how do we solve that? Get a universal regulator over the top of everything to make sure that nobody is making these gambles.
One thing people worry about here is: will the universal regulator become authoritarian? That’s a major worry that a lot of people have. There’s so much fiction about evil people trying to control the world. And we did have entities that were empires trying to be the authoritarian world government.
A couple of times in the past century: Nazi Germany, Imperial Japan, the USSR. There were a bunch of entities going for that. So yeah, those bad guys are out there. But here’s one funny thing about those bad guys, and here’s why I just don’t take that threat very seriously at all.
I don’t think authoritarian world government is a serious threat, because of the nuclear era. Try to imagine where the capital of your authoritarian world government is going to be. Who’s going to be in charge of this thing? China wants theirs to be the capital. Russia wants theirs to be the capital.
Various Western powers might want it. But how do they get everyone else to agree, when everyone else has their own giant supply of world-ending nukes? It’s really hard for anyone in a nuclear world to take over, unless they’re going to make some crazy gambles, North Korea style. Okay, we’ll keep those gamblers out of the game; we need to do that. But really, nobody else.
And we found this in the Cold War: the US and the USSR could stare each other down for decades, and nobody’s going to go at it with these nukes, and nobody’s even going to conventionally invade each other. They do proxy wars, and proxy wars are nasty; they did some nasty proxy wars that tore up Africa, Latin America, Southeast Asia.
You’ve got to be lucky and smart to get away from the proxy wars. But really, they don’t invade each other’s home territory, and they know that if they do, there’s going to be hell to pay. Nobody wants to do that and possibly trigger the end of the world with all these nukes around. That dynamic, I think, prevents anyone from becoming an authoritarian ruler who really imposes their will on others, because all the others have nukes at this point, or enough of them do that you just cannot get unified authoritarian control of the world.
And look at what the authoritarians would want. They’re heavily nationalistic in general. It’s Putin, it’s Xi Jinping; a lot of them want their own power center. And if they’re ethnic nationalist authoritarians like Xi Jinping and Vladimir Putin, they can’t play well together.
“Russian nationalism rules the world” and “Chinese nationalism rules the world” are two separate end games that you have to play out against each other, and they can have a cold war. That’s probably what happens between them.
The Axis powers probably would have had a really grim cold war against each other in the end. Because why really would Hitler and Hirohito want to divide the world between themselves? They can make up fake racial theories that say “we are all one blood, Germans and Japanese”. They have to do it for an alliance, and nonsense like that goes on when you have to do it for an alliance, but then the alliance goes away, because really you want Munich or Tokyo, one or the other, not both.
Gus: At least some Japanese people were honorary Aryans in Hitler’s racial pseudoscience.
Neil: Yeah. They have to build a pseudoscience with honorary Aryan-ship.
Once you get a unified, controlling Nazi Germany and a unified, controlling Imperial Japan, if World War Two plays out that way, I don’t know if being an honorary Aryan is going to last very long.
There’s going to be some conflict eventually, and it’s not going to hold together.
Getting to world government
Gus: Nations can hold themselves in this kind of game-theoretically stable situation, in which there’s mutually assured destruction with nuclear weapons, if these governments are at least somewhat rational and not like a North Korea. So the world you are imagining, is it a world in which we get to a world government, or is it a world in which there is a level above nation states that regulates the interactions between the nation states? Because if it’s the world government, then I think you lose the advantage of nuclear weapons keeping everyone in check.
Neil: Good. Good. So let me show you how you get to the world government.
So suppose you have your multiple-empire scenario. This is what I think plays out this century: there’s going to be a Chinese empire ruled from Beijing, there’s going to be a Russian empire ruled from Moscow, and there’s going to be a bunch of liberal democratic entities that play reasonably well with each other.
Which ones they are, things might flow in and out of that. We’ll see whether America can stay a liberal democracy, or whether certain kinds of people take over there and decide, we want to be strong ethnic nationalists, America is really the land of the white people and the white blood, one unified blood, something like that. If they do that, they can set up their own ethnic nationalist empire over there. Maybe in some way this becomes a bunch of racial empires, the Chinese one versus a Russo-American white one or something; I don’t know what kind of nonsense ends up there.
Russo-American maybe holds together better than Nazi Germany and Imperial Japan did. But let’s take a slightly better scenario: some liberal democracy that includes America holds together. That’s what I’m really playing for in US politics.
And I think this is a point where some EAs, not the movement as a whole, because there are reasons for the movement as a whole to stay nonpartisan and not make enemies, but individual EAs, are just like: look, we cannot have America fall into authoritarian ethnic nationalism.
And I think that’s absolutely right. You cannot have that if you’re an EA, because it disrupts the path to resolving your international problems. If you have a bunch of empires, it’s much easier for a North Korea, or a corporation, or an entity within one of those empires; maybe there’s a secret project being brewed in Moscow or Beijing or Washington where somebody’s building something.
And that goes nuts. A lot of our Cold War doom scenarios, Dr. Strangelove, are that kind of scenario: they built something in secret, they haven’t told anybody, and oops, the way the thing goes, it blows everything up. Okay, so there’s a lot of ways that could play out. But suppose you have your multiple empires and a really robust liberal democratic community as well.
I think what happens here is that liberal democracy wins. It has a winning end game that no authoritarian empire has, because if you’re China, you got to sell the world on Chinese nationalism. And how are you going to sell Moscow on Chinese nationalism?
How is this going to work? In the Cold War, they tried to do that. The Russians went down to Africa, like, “Russian nationalism?”, and Africa was like, “no”. And on the other side, you have freedom, democracy, elections coming from the West, and some people in Africa like that. So you have to sell things in a way that is not headquartered in Moscow or Beijing, if you want the world to come along.
And that’s an advantage that liberal democracy has: it can make alliances internationally much more easily. Ethnic nationalists face an end game that is really hard to win, but liberal democracy, if you get enough of us, here’s what we do. We just say: okay, you over there, authoritarian, even ethnic nationalist authoritarian, we have a plan for you.
And this is my plan for any ethnic nationalist authoritarian. Suppose we end up with everybody in except China, and China is the one ethnic nationalist authoritarianism left. We say to the last group of people there: okay, look, we’re going to buy you out. Here’s the deal. You’re up against us.
And if we fight, it probably ends everything. You have nukes, we have nukes; that’s no good. We can sit here as two separate empires and just stew and stare at each other. But if we can’t get a global framework, maybe a North Korea pops up while we’re not looking and just blasts us all, because little players can do that and get away with it if we don’t have a world government.
So here’s what we do. All of you who are in power right now in China: you get something like an awesome constitutional monarchy. You get to party like celebrities, and your descendants get to, for as long as you can imagine having descendants. You get absorbed into wealthy tabloid-celebrity have-fun world; go do that.
And then those of you who really want to run things can just run businesses and things that fit within a unified global, liberal democratic structure. So all the selfish players who just want to have fun: go have fun. All those who want to run things: you can run things in the new system, and you can actually run some better things, because we’ll let you into our system once you’re playing by rules where everybody can play.
And what’s best of all, there is no North Korea-like entity that can kill us all. So now you can have security, you can have fun, you can have profits, you can have all the science and progress, and dream of the happy world.
You can have all of that. Just let’s put these nukes away and make sure nobody builds them, and nobody builds the “design your own deadly humanity-ending virus” kit. We’ll do that with global democratic policy; we’ll kill all the nukes. Paradise.
Gus: So the plan is to present these dictatorial world leaders, the top of the governments in China and Russia, with a way to stay powerful and stay famous and stay rich without actually commandeering these nations in bad and risky directions?
Neil: Yes, that’s the basic idea. That’s the deal that I want the liberal democratic international community to offer these various authoritarian nationalist entities. And it’s a deal that I think they might take in the end, if it’s presented sweetly enough and built up to enough.
The other thing you need to do is make sure liberal democracy is powerful enough to really make that deal as a powerful player. So what I want EAs to do with the goodwill we might have earned in Africa or India, with the local people there, is just go out and raise some happy, pro-liberal-democracy Africans and Indians where they are, and cultivate that, build that.
And then we’ll have leaders who are supportive of that kind of thing, leaders the EA community has maybe taken lots of effort to educate. Just raise some leaders from over there. So that’s the EA side. The US and Western Europe side is: let’s try to be good, supportive global actors, do lots of foreign aid, raise goodwill, say, join up with the liberal democracies and do well. Where China might have set some developing countries up in debt poverty traps, go to the developing country and find some way to get them out of that trap.
If it involves buying you out of your debt or something like that, we’ll buy you out of your debt; just come to our side and don’t be trapped by China. That’s worth the money if we get an ally out of it, so let’s buy some of these.
Gus: So the value proposition for the dictators around the world is: you will stay an aristocrat, and your kids, your descendants, will be taken care of forever.
This is an interesting way to look at the problem, because leaders are self-interested, and they’re worried about losing power and maybe getting executed, which happens with relative frequency for dictators. One thing that might prevent this from happening is public repugnance at the idea that the leader of North Korea, who dominated his people and did horrific things, is now a kind of star whose life we follow. Do you think this would be realistic?
Neil: We’ve made deals with bad people many times before, so if it’s more of those and safety’s at the end of it I think we’ll be able to pull this one off.
Gus: Yeah, I agree that it would be a price worth paying. But if we convince dictatorial world leaders to give up their nuclear weapons, don’t we lose this advantage of mutually assured destruction that prevents totalitarianism?
Neil: If that’s the way it’s going, and it’s global democracy, one government over the world, democratically elected by every human being in the world, the thing I like about that structure is that now we’ve found a structure within which collective action problems are nicely solved. Within this structure, you won’t have the division where maybe one small actor is taking one-in-a-thousand risks of destroying everybody for awesome profits.
We can stamp out those collective action problems, and it’s global structures that are really good at doing that, where all the people say: I don’t want to be smashed by some corporation that is risking everybody on Earth’s life for profit, so I’ll push for regulation to stamp out whatever kind of technology could kill us all.
That’s how you get the nuclear weapons put away. You have all these authoritarian dictators with their nuclear weapons, and you’re like, let’s fold this into one big system. And they’re like, “okay, we’ll take all the bribes and incentives” and do that. And you’re like, “okay, in this one big system, look, we’re all one country now”. We’re all one country, and all the dangerous toys get put away.
Centralization versus decentralization
Gus: So the worry here is centralization, a central point of failure in the leadership of the world government that could result in a previously liberal and democratic world government turning totalitarian. If you dislike the policies of one country, there’s no option to exit it for another country; a world with world government is, in a sense, a world without alternatives for citizens. So how would we prevent centralized power from becoming corrupting?
Neil: Yeah. So think about the cases where we’ve had disasters with centralized power in the nation-state period, let’s say over the last 100-200 years. Democratic peace theory comes in various versions, and claims about the ability of democracy to prevent things like famine come in various versions.
But hey, Amartya Sen won the Nobel prize for a very good reason. And his argument is actually prefigured by Bentham because Bentham thinks if you have democracy, political power will be aligned with total utility because it’s fundamentally controlled by the smart utility-having entities. Now getting the animals in and getting the future people in is tricky, but at least you have something where the present people are good at not getting themselves killed.
Now, there are some places where democracy has done something weird, and probably the best example, where democracy did something unhappy, was Nazi Germany being created by the collapse of the Weimar state. But a funny thing about that case is, first of all, you can get all kinds of instability in the formation of an initially new government.
So I can’t boast that the world government will be easy to set up initially; there will be some trouble at the beginning. But once the thing gets stable, democracy has a tendency to trundle on. Even in the US, the only reason Donald Trump won is that he got fewer votes and we’d built something bizarre into our constitution that lets the fewer-votes person win.
So there was all kinds of imperfect democracy in the first place that allowed this to even happen. The other big thing in the Trump case, and in the Hitler case too, is the rise of ethnic nationalism. That’s where you get an ethnic majority party forming very cohesively and then following dumb ideas that happen to be part of the special set of ideas of that ethnic majority. In Trump’s case, it was various forms of racism that his followers were fans of, which would get them to want to build a wall on the Mexican border.
In the Hitler case, we know what it was: German nationalism of a certain kind, the Aryan nationalism that you saw there. When you have a world government, ethnic nationalism is a lot harder, because humanity as a whole is running it, a mix of entities. You just don’t get the dynamics of “90% of us are one way, let’s smash all the small minorities and make a pure racial state of us”.
You just couldn’t do that in a global democracy. The votes just don’t work out.
Global totalitarianism
Gus: What about the other side? There’s an in-built protection against ethnic nationalism in a liberal democratic world government, but is there an in-built protection against Stalinism, or communism as we saw it in the USSR?
What I’m drawing on here is a paper by Bryan Caplan, from one of the earliest collections of papers on catastrophic risks, in which he worries about totalitarianism as a stable system that prevents us from reaching our potential.
Imagine extremely advanced surveillance technology that keeps a centralized state in power even if it’s mistreating its people, because it’s capable of manipulating the wishes of the population, manipulating what can be said and what cannot be criticized. Is this something to worry about?
Neil: I don’t see how this scenario develops. So suppose we’ve already got our liberal democratic world government set up, everybody around the world is voting. They have a secret ballot. They can vote as they want, if the government’s bad in some way, as long as they have the secret ballot and they can vote as they want, they’ll vote for somebody who decides less surveillance if it really is that bad, or surveillance that doesn’t cause whatever problem is being caused.
So as long as we get robust enough democratic structures, we just hold that off. Now maybe there’s some way to break into the system, but I just don’t see what it is. And once you have a world government, you can build up even more protections if you’re seeing certain potential threats of that nature arise. I don’t see how it would form.
You could set up some constitutional rights that are really hard to fight through, to defend your structure effectively. There’s a lot you can do to prevent that. So I don’t see how that comes up, though I can see why somebody would worry.
Here’s one of the things about the Soviet Union situation. What’s going on in Russia in the early 1900s is really grim. This is just not a well-run state. It has low life expectancy; at some points, if I recall correctly, below 40 years, definitely below 50.
It’s pretty miserable there. And as for the actual governance, you’ve got Rasputin, to put it that way. You’ve got random clowns showing up, convincing the queen that they can cure the prince, and getting massive political power out of it. This is just not a state that’s run well.
And once people are wealthy enough, they don’t want to participate in bomb-throwing revolutions. They don’t want to do instability. They want to own stocks and they want their daughters to have piano lessons. So yeah, they’re doing that now and they aren’t going to cause trouble. So if you get enough people like that who are good, middle-class, liberal democratic voters who don’t want to do violence, the energy for authoritarian upheaval is much weaker.
Maybe there’s some new way to get that kind of state. Maybe an AI could do it in some dangerous way at some future point. But it doesn’t look anything like the ones we’ve seen before, and I think those early models are obsolete models for how to set it up.
There might be some future model that can set it up, and people need to look into that. But really, the kind of power that could set that up within a liberal democratic world government is a kind of power that could probably blow up the world in the other scenario anyway. I just don’t see that we are losing, even on the totalitarian world government scenario, because the other scenario seems to be destruction. If we don’t go for world government, we don’t survive the centuries, as far as I can tell.
Gus: Let me paint you a scenario, and I don’t know how credible the scenario is. Imagine we have a liberal democratic world government, and we set up an agency within this government to monitor for engineered pandemics. For this purpose, we’re interested in checking everyone’s communication, to see whether they’re sending the strings of information that could potentially be turned into deadly viruses.
Okay? Now imagine that this agency wants to uphold its own existence. It wants to continue existing and getting funding, and its members want to be promoted within the agency. So again, we’re thinking of government agents, as of all agents, as self-interested.
So imagine that, for this purpose, they must continually find risks, and the definitions of what counts as a risk keep expanding. You could imagine this becoming like a secret police, with future AI technology to check everyone’s communication continually. And then say the leaders of the world government are not especially satisfied when people critique them. Why not clamp down on this to stay in power? Again, I don’t know how credible this is, but it’s one possible scenario.
Neil: What I’m seeing in this is that the power of universal surveillance must be controlled very tightly. We’ve got to make sure no agency becomes able to do that.
Now, there are tricky things in making sure agencies can’t do that, especially as they could do it very quietly. That’s what’s really risky. I don’t have a solution to the quietness problem immediately, and it’d require some people who are smart about technology to look into what you’d have to do about that.
And the same for the future of governance structures. We could design future governance structures where anybody who has that power has to share it with N other entities who can also check the work, or something like that. Lots of checks and balances would have to be in the system.
One nice thing that humanity has achieved, to some extent, is that some societies are pretty good at keeping the deadly weapons that can be used to kill civilians out of the hands of people who would kill civilians. Societies are better and worse at this; the police violence situation in the US is an example of some people getting out of hand with it, and problems result.
But you can regulate properly the immediate ability to kill, and that’s a really powerful ability if it gets out of hand. I actually worry a little that in the US it might be somewhat out of hand, if politicians are afraid the police might shoot them, or something like that, if they talk against police interests.
So there may be something of that kind going on in certain places in the US, I worry. If you let something have that kind of power, it’s dangerous. But in some places we have ways of properly making sure that if anyone’s going to use that kind of power, they have to explain and answer to a whole bunch of other people about what they’re using it for.
So build up those kinds of structures, as some places successfully have; not everything in the world is an immediate gun-to-your-head state. If we can build it up like that, take whatever checks-and-balances system we use to control the arms, suitably modified, and put it over the surveillance. That seems pretty good to me.
Gus: Yep. Okay. So a world government, if it’s liberal and democratic, would definitely be an improvement for billions of people around the world. One other worry is that a government like this would be stable in a way that prevents experimentation. I see progress as basically conjecture and refutation, so we want experimentation in government structures and in ways of regulating society, and maybe a world government will prevent this. Maybe we wouldn’t have information about which systems are better than others, because we cannot experiment.
Neil: This is something to keep in mind when designing global governance structures. But one thing that would at least mitigate that worry is understanding what the function of the world government is and what the function of local governance would be. The function of the world government, as I see it, is to solve collective action problems that affect the entire world.
And these technological risks are examples of collective action problems. Militaries are things we have because we have borders across which hostile entities might be. Get rid of those, and defense spending goes to zero, which unleashes so much money. I really like that part of world government.
We can then spend it on fun stuff, on science, on keeping us safe, on universities, whatever; all that will stay up. Now, when we have a world government solving these collective action problems, there’s plenty of room for all kinds of local governments to do things like managing local city infrastructure, because really, why would the world government need to do that?
And if people in China want to have their festivals a certain way, and people in France want different festivals, fine. Local governments will put those on. All kinds of things can be done according to which body is best at handling them: collective action problems that affect everybody go to the world; stuff in your town goes to the town.
So you’ll have experimentation between all the little towns and cities, and they can pick up the best from each other. But you might lose experimentation in how to solve global collective action problems. We should be slow in that, because that’s what we’re doing to prevent the end of the world. Going a little slow on things where, if we get them wrong, we don’t get another chance is, I think, justified.
Gus: I’m genuinely undecided about this issue. I find it extremely difficult to reason about the trade-offs in centralization versus decentralization. I think it’s an evolving topic, and effective altruism needs to think more about it.
Neil: Yeah. I’ll give my support for democratic centralism to solve collective action problems.
What is EA doing wrong?
Gus: Let’s end by discussing what effective altruism is doing wrong. This is one of my listeners’ favorite topics: hearing how they’re wrong. You’ve been a member of this movement for, let’s say, a decade. So it would be interesting to know where your views diverge the most from the effective altruism mainstream.
Neil: One mistake is to think that effective altruists are unified enough that there’s one thing we can all be doing wrong. Yeah.
Gus: True. True.
Neil: Yeah. As a global EA myself, I’ve been through so many different communities. The two big centers of gravity for EA are Oxford, where a bunch of Giving What We Can stuff is headquartered, and the Bay Area, where a lot of our pro-EA billionaires and AI people are. So we have these two centers. And the Bay Area culture, I think, is something a lot of people have a bunch of feelings about, and it does express itself in the movement.
But we have a lot of other places too, and I’ve seen EA chapters in Hong Kong and Singapore and all over the world. When our Africans get going, I look forward to that; when our Indians get going too. I especially look forward to the EA kids of the future, the saved generations, the ones who themselves took the deworming pills.
They themselves had the bed nets. Oh, when they rise, it’ll be beautiful, and we’ll have a diverse population of people. It’s moving that way. The ideas are mainstreaming, and as they do, we just get more diverse. So that’s really good, and I’m very optimistic about the movement. There are a lot of little things I could pick on here and there.
I wish more people were into your naturalistic realist utilitarian combination in the philosophy. There are too many anti-realists out there on the real-world EA side who don’t know how much objective and universal moral value is worth for the project, and too many non-naturalists on the metaethics side who have bought into the dogmas of the past 25 or 30 years in metaethics and don’t know that good old British empiricism, with Hume and Hutcheson and all these good folks, Adam Smith and Locke, is back there to save the day and get us into the ethics of Mill and Bentham. That’s all out there. But yeah, I’m optimistic about the movement.
I really think we do have a lot of good people pushing us in a good direction, and we’re only going to get better ones as time passes. Sorry to disappoint, but we’re doing really well.
Gus: Okay. There’s been a development in effective altruism from the early beginnings. There was a lot of focus on global health charities, rating the effectiveness of global health interventions. As the decade has passed, we’ve seen more focus go towards preventing existential risks from nuclear weapons, from pandemics, from AI. And I don’t want to overstate this, because the origins of EA involve people such as Eliezer Yudkowsky, who was very into AI risk in the mid-2000s, and some of the central figures, Holden Karnofsky, William MacAskill, Toby Ord, were aware of these risks and interested in them from the beginning.
Do you see EA changing focus again, as radically as we might characterize it as having changed? Is there some other place we could land, some development towards another focus that we haven’t yet discovered?
Neil: I actually have suggested one to you just in the last hour or so, which is a political focus towards global liberal democracy. That’s where I think the longtermist solution pushes us. The global poverty people, I think, will help us get there by cashing up the poor, because it’ll be easier to get global liberal democracy with Africa being an economically productive, reasonably well-to-do place. When you have inequality of a massive kind, getting a unified structure around everybody is tough, because the rich Westerners are like, do I have to give away all my money to the Africans?
How do we get one unified political system for people who might believe completely different things because they have different educational situations? You have much more trouble there. But suppose you raise everybody to a similar level. Now global poverty is coming together with the international cooperation people, and we’re working together to do the long-term thing.
So I actually see those two sides as unified in the kind of project I’ve suggested. And once you get liberal democracy globally, and once Africa gets to vote on social welfare policy, all right, now some money goes out there, and we can do it in a proper, formalized, government-run kind of way.
And you get all the power of that behind it, all the efficiency you can get once you really try. Give people a good start. If some kid is growing up in an unfortunate parental situation and doesn’t have enough resources, make sure the kid gets food and all the resources that are needed.
Children growing up in poverty is just human capital on fire. Just put out the fire. This is a huge thing that needs to be done. I’m for massive redistribution: destroy incentives and put out that human capital fire we’ve got burning in the world.
That’s my dad. Goodness knows what happens to him if he doesn’t eat, and then I might not get born. I don’t really know what happens in the sixties and seventies. He probably survives, maybe he marries my mom, but maybe he doesn’t do well in school because he couldn’t eat, and just falls out of school. You know, it could happen.
Yeah, so there’s just so much we could do, where I think the EA foci are coming together in a nice way. If you want to deal with wild animal suffering, having a world government is going to make it a lot easier. There are all kinds of problems that get solved together at once by the structure.
And I really see the EA movement, maybe it discovers some new cause areas, but I don’t see anything that pushes it towards some radical tearing apart. We have disagreement, and we’ll have disagreement, but so far I’ve seen healthy disagreement. There are some dogmatic people, dogmatic about AI being the big thing, or about global poverty being the big thing and AI being stupid, or animals. But there’s a big core of people who understand that all these causes are important in some way.
And are just thinking about them together. And I think that’s the way to think about them.
Gus: Thank you for spending this time with me. Your enthusiasm is wonderful, and the way you present your ideas, with enthusiasm, is convincing.
Neil: I’m honored to hear that, Gus, thank you so much for giving me this opportunity.