In December 2022, Tyler Cowen gave a talk on effective altruism. It was hosted by Luca Stroppa and Theron Pummer at the University of St Andrews.
You can watch the talk on YouTube, listen to the podcast version, or read the transcript below.
Transcript
The transcript was generated by OpenAI Whisper. I made a couple of minor edits and corrections for clarity.
Hello everyone, and thank you for coming to this talk. Thank you Theron for the introduction.
I find effective altruism is what people actually want to talk about, which is a good thing. So I thought I would talk about it as well. And I’ll start by giving two big reasons why I’m favorably inclined, but then work through a number of details where I might differ from effective altruism.
So let me give you what I think are the two big pluses. They’re not the only pluses. But to me, they’re the two reasons why in the net ledger, it’s strongly positive.
The first is that simply as a youth movement, effective altruism seems to attract more talented young people than anything else I know of right now by a considerable margin. And I’ve observed this by running my own project, Emergent Ventures for Talented Young People. And I just see time and again, the smartest and most successful people who apply get grants. They turn out to have connections to the EA movement. And that’s very much to the credit of effective altruism. Whether or not you agree with everything there, that to me is a more important fact about the movement than anything else you might say about it. Unlike some philosophers, I do not draw a totally rigorous and clear distinction between what you might call the conceptual side of effective altruism and the more sociological side. They’re somewhat intertwined and best thought of as such.
The second positive point that I like about effective altruism is simply that what you might call traditional charity is so terrible, such a train wreck, so poorly conceived and ill thought out and badly managed and run that anything that waves its arms and says, hey, we should do better than this, again, whether or not you agree with all of the points, that has to be a big positive. So whether or not you think we should send more money to anti-malaria bed nets, the point is effective altruism is forcing us all to rethink what philanthropy should be. And again, that for me is really a very significant positive.
Now before I get to some of the more arcane points of difference, or at least of different emphasis, let me try to outline some core propositions of effective altruism, noting I don’t think there’s a single dominant or correct definition. It’s a pretty diverse movement. I learned recently there’s a sub-movement, effective altruism for Christians. I also learned there’s a sub-sub-movement, effective altruism for Quakers. So I don’t think there’s any one way to sum it all up, but I think you’ll recognize these themes as things you see appearing repeatedly.
So my first group of themes will be those where contemporary effective altruism differs a bit from classical utilitarianism. And then I’ll give two ways in which effective altruism is fairly similar to classical utilitarianism.
So here are three ways I think current effective altruism has evolved from classical utilitarianism and is different:
The first is simply an emphasis on existential risk, the notion that the entire world could end, the world of humans at least, and that this would be a very terrible thing. I don’t recall having read that, say, in Bentham or in John Stuart Mill. It might be in there somewhere, but it certainly receives far, far more emphasis today than it did in the 19th century.
The second point, which I think is somewhat in classical utilitarianism, is this notion of legibility, the idea that your standards for where to give money or where to give aid should somehow be presentable, articulable, reproducible. People ought to be able to see why, for instance, you think investing in anti-malaria bed nets is better than, say, giving a lot of money to Cornell University. So maybe that’s a sociological difference. It may or may not be a logical requirement of effective altruism, but again, it’s something I see quite clearly in the movement, a desire for a certain kind of clarity of decision.
The third point is what is sometimes called scalability. That is the notion that what you can do, when its final impacts are scaled up, might be of truly enormous importance. So for instance, if we sidestep existential risk and keep on growing our civilization in some manner, you might decide that, well, 10,000 years from now, we can colonize the galaxy and there can be trillions and trillions of individuals or cyborgs or whatever doing incredible things, and that there’s this potential for scale and the highest value outcomes, and that that should play a role in our decision making. So that point about scalability is a third difference between current effective altruism and classical utilitarianism. You can actually find Sidgwick being concerned with scalability as something important. Parfit obviously was somewhat obsessed with scalability, but still overall, I think it’s a big difference between current effective altruism and what you might call classical utilitarianism of the 19th century.
Now, I’m going to get back to each of those. Here are two ways in which effective altruism seems to me really quite similar to classical utilitarianism:
The first is simply a notion of the great power of philosophy, the notion that philosophy can be a dominant guide for one’s decisions. Not too long ago, I did a podcast with William MacAskill. You all probably know him and his work. I think the main point of difference that emerged between the two of us is that Will has a very ambitious view of what philosophy can be, that it ultimately can in some way guide or rule all of your normative decisions. My personal view is much more modest. I see philosophy as one useful tool. I think there’s just flat outright personal prudence. There’s managerial science, there’s economics, there’s history, there’s consulting with friends, there’s a whole bunch of different things out there. In my view, true prudential wisdom is to somehow, at the personal level, have a way of weighting all those different inputs. I find when I speak to Will or someone like Nick Bostrom, they’re much more “rah-rah philosophy”: philosophy is going to rule all these things, they ultimately fall under its rubric, and you need philosophy to make them commensurable. That I think is something quite significant in effective altruism, and it’s one of the areas where I depart from what effective altruism, at least in some of its manifestations, would recommend.
Another notion you find both in effective altruism and classical utilitarianism is a strong emphasis on impartiality. The notion that we should be neutral across geographic space: people living, say, in Africa, Vietnam, wherever, are not worth less than people living in our own country, and people in our own village are not worth more. Across time, we should be impartial. There are a lot of different ways in which both effective altruism and classical utilitarianism suggest that most other perspectives are simply far too partial, in the sense of taking someone’s side and then trying to optimize returns for that side. That again strikes me as quite a strong feature of most effective altruism. On that too, I have some differences.
Let me introduce the sequence of differences with effective altruism by starting with this notion of impartiality. My view is that at the margin, if I may be forgiven for speaking like an economist, virtually all individuals and certainly all governments are far, far too partial. The effective altruist notion that we should be more impartial is, I think, correct 100% of the time at current margins. I’m sure you all know Peter Singer’s famous 1972 argument: the child is drowning in the pond, you can save the child, it may soil your coat. Obviously, you should do it. You shouldn’t just say, oh, it’s not my kid, I don’t need to worry about this. At current margins, I’m fully on board with what you might call the EA algorithm. At the same time, I don’t accept it as a fully triumphant philosophic principle that can be applied quite generally across the board or, as we economists would say, inframarginally.
Let me just give you a simple example. I gave this to Will MacAskill in my podcast with him, and I don’t think he had any good answer to my question. I said to Will, well, Will, let’s say aliens were invading the earth and they were going to take us over, in some way enslave us or kill us and turn over all of our resources to their own ends. I said, would you fight on our side or would you first sit down and make a calculation as to whether the aliens would be happier using those resources than we would be? Now, Will, I think, didn’t actually have an answer to this question. As an actual psychological fact, virtually all of us would fight on the side of the humans, even assuming we knew nothing about the aliens, or even if we somehow knew, well, they would be happier ruling over an enslaved planet earth than we would be happy doing whatever we would do with planet earth. There’s simply, in our moral decisions, some inescapable partiality.
And this is a way in which I think David Hume, for instance, was by no means a pure utilitarian. He saw this partiality as an inescapable feature of human psychology. I would go further than that. I would stress that the fact that it’s an inescapable feature of human psychology means at the normative level, there’s just no way we can fully avoid partiality of some kind, even though you might think, as I do, like in all of our current real world decisions, we are far too partial and not sufficiently impartial. It seems to me there’s always a big enough comparison you can make, an absurd enough philosophic thought experiment where when you pose the question, should we do X or Y, it is impossible to address that question without having a somewhat partial perspective. So that’s the way in which I differ from effective altruism or some versions of it at a philosophic level, even though in terms of practical recommendations, I would say I’m fully on board at the margin.
Now it turns out that this view of impartiality, that you can’t be fully impartial… it’s going to matter for a number of our real world decisions. Let me turn to an example of where I think we cannot help but be partial. So when it comes to scalability, effective altruists stress the upward potential for these fantastic outcomes where we colonize the galaxy, there’s trillions of people, they may not even be people anymore, they might be uploads or we’re all transhumanist, or there’s just some wildly utopian future. I’m not sure we’re able to process that as a meaningful outcome. That doesn’t mean I don’t value the option on it or the possibility, but I’m not sure there’s any kind of utilitarian algorithm where you can calculate the expected value of an outcome so different from the world we are familiar with.
And let me bring this back to a very well-known philosophic conundrum taken from Derek Parfit, namely the repugnant conclusion. Now Parfit’s repugnant conclusion asks the question, which I think you’re all familiar with: should we prefer a world of 200 billion individuals who have lives as rich as Goethe, Beethoven, whoever your exemplars might be, or should we prefer a world of many, many trillions, make it as asymptotically large as you need it to be, many, many trillions of people, but living at the barest of levels that make life worth living? Parfit referred to them as lives of muzak and potatoes. So in Parfit’s vision of these lives, you wake up when you’re born, you hear a little bit of muzak, maybe it’s slightly above average muzak, they feed you a spoonful of potatoes, which you enjoy, and then you perish. Now most people, because of Parfit’s mere addition principle, would admit there’s some value in having this life of muzak and potatoes compared to no life at all. But if you think through the mathematics, if you add up enough of those lives, it would seem a sufficiently large number of those lives makes for a better world than the 200 billion people living like Goethe.
There’s a huge literature on the repugnant conclusion. I’m not really going to address it, but I’ll simply give you my conclusion. I’m quite sure that no one has really solved the repugnant conclusion. There are different attempts to wiggle or worm or squiggle one’s way out of it, but I think the actual answer to the repugnant conclusion is that the lives of all those muzak and potatoes entities are not really human lives as we understand them. If you consistently think through the existence of a muzak and potatoes entity, I don’t think there’s any particular animal you could compare it to, but again, it’s not really intelligible in terms of our normal human existence. In comparing the lives of the Goethes to the muzak and potato lives, I would just say we’re comparing two different species. When you compare two different species, I don’t think there’s a single well-defined unit of utility that enables you to say which one is better. There’s not an answer. I think the so-called answer, if you would call it that, some would call it a non-answer, is Hume’s observation that we cannot help but be somewhat partial and prefer the lives of the recognizably human entities, the 200 billion Goethes. We side against the repugnant conclusion. There’s not actually some formally correct utility calculation that should push us in the opposite direction.
I think you can say the formal utility calculation is not well-defined. What we’re left with is our particularistic siding with the human-like entities. I think that reflects the actual intuitions we have in rejecting Parfit’s repugnant conclusion.
Now, once you admit that, once you say that in a lot of these comparisons, we are intrinsically particularistic and somewhat biased and have a partial point of view, I think when you look at issues such as animal welfare, you also have to wonder how much you can make formal comparisons. My view on animal welfare is that at the margin, we are very systematically underinvesting in the welfare of non-human animals, and this is a terrible thing. At the margin, we should correct it, but I don’t actually think there’s some aggregate calculus that you can add up and arrive at an answer of, well, what exactly should humans do to maximize total well-being on the planet, weighing off all human well-being against the total well-being of all non-human animals? Any view on that topic strikes me as intrinsically underjustified while admitting that at the margin, we’re not doing nearly enough to take care of the welfare of non-human animals. I just don’t think there’s a natural unit of comparison.
I observe very large numbers of EA people being vegetarians, being vegan. If I’m going to meet an EA person for a meal, I don’t even ask them. I just make sure we go to an Indian restaurant where they can get some good food that’s not just meat. I assume at least a 50% chance they’re vegetarian or vegan, and that’s fine. I think at the margin, it’s a better way to be. But at the same time, consider that in a world with about 8 billion humans, we have taken up about half of all habitable land for agriculture. Yes, we could make that more efficient, but at the end of the day, once we start addressing climate change by putting in a lot more panels for solar energy, we’re going to take up a lot more land producing green energy under most plausible scenarios. So it seems to me in any world with billions of humans, you end up doing something to a large number of species of animals which resembles extermination or near extermination. I don’t think I have a normative theory that tells me how justified or not justified that is. I am pretty sure I have the partial perspective of a human being that just tells me, well, we’re going to do it anyway. There’s not an equilibrium where we cede the land to the cockroaches because we realize they’re a slight bit happy and you can fit more of them in my house than you can fit of me. Just not going to happen. So again, this notion that there’s something there at the margin, but at the macro level there’s just not a meaningful comparison, would be an area where I differ from a lot of effective altruists. Not all of them by any means, but I think a lot of them take these very philosophic, very utilitarian, very macro views of how we should make these decisions.
This also relates to what I gave as one of the definitions of EA, that they’re very concerned with moral decisions being legible. Again, I’m not opposed to legibility, but I think relative to most EA types I know, I see a large number of moral decisions—I mean, really a very large number of important moral decisions—that simply will never be legible because they are intrinsically partial to some extent. So I sort of admire the quest for legibility at the margin. I think it’s a good thing in the actual world we live in, but I’m not quite as enthusiastic about it as a macro normative paradigm the way some EA individuals would be.
A big area of difference that I think I have with a lot of effective altruist types is the emphasis on existential risk. If you look at the EA movement, what you might call sociologically, I see two different, somewhat separate parts of it. They’re intertwined in institutions, but the first part of EA, practically speaking, is the part that does charity, invests in anti-malaria bed nets, tries to prepare the world for the next pandemic and so on, doing what you might call very prosaic things, but trying to do them better and more rationally than preexisting philanthropy had done. And I find I’m 100% on board with that, 110%. The second part of the EA movement is concerned with a whole series of investments in fighting existential risk. Very often this takes the form of worry that artificial intelligence will turn into some form of AGI that may pulp us into paperclips or destroy us or have its way with the world and end human civilization as we know it. It’s an interesting question, historically and sociologically, why that’s become intertwined with the anti-malaria bed nets, but it has. And I think actually for some decent reasons. But I would just say I’m in general more skeptical about the extreme emphasis on existential risk than a lot of effective altruism would be.
One reason for that is simply how I diagnose the main sources of existential risk. If you think the main existential risk worry is AGI, that leads you to one set of actions. If you think the main existential risk is nuclear war, or some kind of major war with other weapons of mass destruction, possibly in the future, it leads you to a very different set of actions. I think nuclear war, just general warfare, is a larger risk than AGI is. I would fully recognize the EA point that a major nuclear war is unlikely to kill each and every human being on planet Earth. But I think it would wipe out civilization as we know it and would set us back many millennia. And I don’t think there’s an obvious path toward recovery. In the meantime, we’re much more vulnerable to all these other existential risks: a comet or asteroid might come, and we can’t do anything to deflect it because we have something only a bit above a Stone Age existence. So I’m at least 50x more worried about nuclear war than at least some of the EA people I know who are more worried about AGI.
But now let’s look at the risk of nuclear war. What now are the two major risks of nuclear war, at least plausibly? We don’t know for certain. One would be the war in Ukraine, where Putin has said he might use nuclear weapons. Possibly that could escalate. I don’t think it’s the likely scenario, but at this point you have to see some chance of that. The other major risk in terms of a significant war would be China attempting to reincorporate Taiwan within its boundaries. The US has pledged to defend Taiwan. You could imagine there’s some chance of that leading to a major nuclear war. But then if you ask the question, well, which investments should we make to limit the risk of nuclear war over Ukraine or Taiwan? I’m very keen on everyone doing more work in that area, but I don’t think the answers are very legible. I don’t think they’re very obvious. I don’t think effective altruism really has anything in particular to add to those debates. I don’t mean that as a criticism; I don’t think I have anything in particular to add to those debates either. They’re just very hard questions. But once you see those as the major existential risk questions, the notion that existential risk as a thing should occupy this central room in the effective altruism house seems like a weaker conclusion to me. Yes, of course, we should worry about nuclear war. We should do whatever we can. More people should study it, debate it, whatever. But it just seems like an ordinary world where you’re telling people, well, think more about foreign policy, which, again, I’m all for. I don’t think effective altruism is opposed to that conclusion.
There’s plenty of EA forums where people debate nuclear war and would say things I pretty much would agree with. But again, at the end of the day, I think it significantly weakens existential risk as the central room in the EA house. And EA just tends to dribble away into a kind of normal concern with global affairs that you would have found in the American foreign policy establishment in 1953 in a quite acceptable way. But EA then just isn’t special anymore.
I think there’s a way in which EA people don’t take existential risks seriously enough. Let’s say you think in any year there’s a chance of a major nuclear war. My personal view is that chance is really quite small, so I’m not in that sense a pessimist. I probably think it’s smaller than most people think it is. But what I do do, and people hate me for this, I’m going to depress you all for the rest of your lives, is just ask the simple question: if we have a tiny risk in a single year, for how many years does the clock have to tick before that small risk happens? This is a well-known problem in financial theory. But basically, within the span of a thousand years, or maybe shorter, there’s quite a good chance, by my calculations, that the world has something like a major nuclear war, or maybe the weapons are different by that point in time. But you get what I’m saying. So even if you’re optimistic for any single year, you shouldn’t be optimistic over a time span of a thousand years.
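[To make the compounding arithmetic concrete: if p is the annual probability of a major nuclear war, the probability of at least one such war within n years is]

$$P(\text{at least one war in } n \text{ years}) = 1 - (1 - p)^n, \qquad \text{e.g. } 1 - (1 - 0.005)^{1000} \approx 0.99.$$

[The 0.5% annual figure is purely illustrative, not a number Cowen gives; the point is just that even a small per-year risk approaches near-certainty over a millennium.]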
This problem gets all the worse when you consider that right now only a small number of nations can afford nuclear weapons, but the cost of producing those or other very powerful weapons is likely to decline if we continue to have technological progress. And so the number of agents who have the ability to destroy a big part of the world just because they’re pissed off, crazy, feuding with each other, whatever, that number of agents is really only likely to go up, not down. So once you think the world as we know it has a likely time horizon shorter than one thousand years, this notion of, well, what we will do in thirty thousand years to colonize the galaxy and have trillions of uploads or transhuman beings, you know, manipulating stars, just doesn’t seem very likely. The chance of it is not zero. But the whole problem to me starts looking more like Pascal’s wager and less like a probability that should actually influence my decision making. So that whole strand of thought, I just feel I’m different from it. Again, I would fully recognize that these people would agree with things I would say about nuclear war. I view them as a positive force, raising consciousness about the risk of nuclear war. But just sociologically, the entities they have constructed around the issue of existential risk feel very, very different from how I would view that issue.
Another way in which I think my view of existential risk differs from that of many EA individuals: I look at many issues, in particular artificial intelligence issues, from a national perspective, not a global perspective. So I think if you could wave a magic wand and stop all the progress of artificial intelligence, I’m not going to get into that debate now, but I’ll just say I can see the case for stopping it if you can wave the magic wand. Well, it’s too dangerous, it’s too risky and so on. We all know those arguments. But my view is you never have the magic wand at the global level. The UN is powerless. We’re not about to set up some transnational thing that has any real influence over those decisions. Different nations are doing it. China is doing it. And the real question to ask is, do you want the U.S. to come in first and China to come in second, or rather China to come in first and the U.S. to come in second? It’s just a very, very different question than should we wave the magic wand to stop the whole thing. Now, I don’t want to debate that question either, but I’ll just tell you my perspective, which is in part partial, as I discussed earlier, but in part I think just broadly utilitarian: I want the U.S. to win. So I think we should go full steam ahead on AI. And I think that for reasons I don’t hear sufficiently frequently from the EA community, namely that the real question is whether the U.S. gets it or China gets it. Then you have to pick a side, and you can pick the side for utilitarian reasons, or you might in part pick the side for reasons that are, you know, partialistic. But nonetheless, freezing it all is not on the table.
I even worry that the people who obsess over AI alignment, I suspect their net impact to date has been to make AI riskier by getting everyone thinking about these scenarios through which AI could do these terrible things. So the notion that you just talk about or even try to do AI alignment, I don’t think it’s going to work. I think the problem is, if AI is powerful enough to destroy everything, it’s the least safe system you have to worry about. So you might succeed with AI alignment on say 97 percent of cases, which would be a phenomenal success ratio, right? No one seriously expects to succeed on 97 percent of cases. But if you failed on 3 percent of cases, that 3 percent of very evil, effective enough AIs can still reproduce and take things over and turn you into paperclips. So I’m just not that optimistic about the alignment agenda. And I think you need to view it from a national perspective, not like where’s the magic wand that’s going to set all of this right. And in that sense, I think I’m different from EAs as I meet them.
I can readily imagine a somewhat reformed EA that would say, well, when we consider all possible factors, we should really take something more like Tyler’s view. But again, I think as you do that, this place that existential risk has as a central room in the EA house, that again tends to dribble away quite a bit.
Now I’m almost done. I believe no Zoom talk should be so long and we have time for Q&A. But let me just say a few things about EA as a movement. And again, I don’t think you can fully separate the movement from its philosophy.
I think for virtually all social movements, I have a saying, and that is “demographics is destiny”. And you may like this or you may not like it. But it is also true for people in effective altruism. It’s been true for libertarianism. So if you look at early libertarianism, say in the 1970s, it had a very particular logical structure. But the actual descriptive fact of the matter was that a lot of libertarians were nerdy white males, and some other set of libertarians were non-nerdy, somewhat rural, kind of gun-owning, anti-authoritarian, very US American white males who had a particular set of grudges. And what you end up getting from libertarianism is some evolution of those socioeconomic groups, which again, you might like or you might not like. But if you’re evaluating libertarianism, you have to think a bit: well, what are the natural constituencies? What will those demographics get you? Do you like all of that? And are you ready for the stupider version of the theory?
So there’s the Robert Nozick super smart, very sophisticated libertarianism. But of course, that’s not what you get. You get the stupider version. So you have to ask with effective altruism, well, what is the stupider version of effective altruism, what are its demographics, and what does that mean is its destiny? I have limited experience in the EA movement, but at least in the United States, and I would readily admit other countries may differ, it’s quite young, very high IQ, very coastal, very educated, but broadly people with a social conscience. None of that should be surprising, right? But I think the demographics of the EA movement are essentially the US Democratic Party. And that’s what the EA movement over time will evolve into. If you think existential risk is this kind of funny, weird thing that doesn’t quite fit, well, it will be kind of a branch of Democratic Party thinking that makes philanthropy a bit more global, a bit more effective. I wouldn’t say it’s a stupider version, but it’s a less philosophical version that’s a lot easier to sell to non-philosophers. So you end up telling people they should think more broadly about how to give money at the margin. I agree with that. I’m fine with it. But it’s simply another way of thinking about what effective altruism really is, just as it is with libertarianism: just understand that demographics is destiny.
I gave a talk to an EA group lately, and I’m always like a troll when I give these talks. That’s on purpose, but I’m saying things I believe, to be clear. I’m not making up stuff I don’t believe. So I said to them all, you EAers, you should be more religious, and a lot of you should be social conservatives, because social conservatism is actually better for human welfare. I didn’t mean that in the sense of government coercively enforcing social conservatism on people, but just people voluntarily being more socially conservative. I said a lot of you should do a lot more to copy the Mormons. Maybe now I would say the Quakers, other groups too. What was striking to me was how shocked everyone was by this point. I’m not saying they should have agreed or not agreed, but they hadn’t ever heard it before. Will MacAskill said this in his podcast with me. He said, oh, we’re all social liberals. That doesn’t really follow at all. It’s something you can debate. But the fact that so many people took it for granted to me gets back to this point. When it comes to EA, demographics is destiny, and the future of socially conservative EA is really not that strong, but it should be. And after my talk, a woman came over to me and whispered in my ear and said, you know, Tyler, I actually am a social conservative and an effective altruist, but please don’t tell anyone. I thought that was awesome. And of course, I didn’t tell anyone who she was. But the mere fact that she felt the need to whisper this to me gets back to demographics is destiny.
So in part, when you think about EA, don’t just think about the philosophic propositions. Think about what it is, what it is becoming. And again, in the United States, at least, it is a very specific set of sociological connections that will shape its future. I like many of those trends at the margin, but I would say not all of them. And just looking at EA that way would be another way in which I differ from a lot of the effective altruists themselves, noting that when I mentioned social conservatism, they just all were shocked. I don’t think they agreed with me, but for me, the important point was that they were shocked that I would bring up such a thing. They just took it for granted that effective altruist people ought to be social liberals.
Anyway, with that, I will end my formal remarks. I know we have a five-minute break, according to the customs of St. Andrews. That would be one of these customs that I think Coleridge would have approved of, St. Andrews being an endowed institution and Coleridge being one of the great critics of the effective altruist movements of his own day. So, we now take our break.
The talk was followed by roughly an hour of Q&A (not transcribed).
The Q&A begins 36 minutes and 40 seconds into the recording.
Comments

I don’t get his point about social conservatism. Does he mean that a mass appeal socially conservative EA will be more EA somehow than a mass appeal socially liberal EA? Or that EAs should recruit social conservatives to influence more segments of the population?
In addition to the other comment, I think he’s also indirectly pointing to the demographic trends (i.e. fertility rates) of social conservatives. Social conservatives have more kids, so they inherit the future. If EA is anti-natalist and socially liberal, we will lose out in the long run.
(Not sure if the view stated above is your own, or one that you attribute to Tyler—in any case just adding this as a counterargument :) for anyone reading this thread and finding this topic interesting)
This is only true if you think that political values are set at an early age and remain stable throughout life—or if you commit to unrealistic ceteris paribus assumptions about the future political landscape. Furthermore, there is also evidence that these beliefs can change; for example, exposure to education is linked to liberalisation of social attitudes. So, even if social conservatives have more kids, an ‘anti-natalist and socially liberal’ EA could still inherit the future as long as it manages to persuade people to support it.
In general, this sounds like a ‘Demographics are Destiny’ idea, which might be intuitively plausible but to me comes off as quite ‘hedgehog-y’. You could always find a reason why an emerging majority hasn’t arrived in a specific election and will emerge at the next one, and then the next one, and so on. I think one can maybe make broad assessments of political future given demographic trends but, as always, prediction is hard—especially about the future.
I don’t endorse that view myself, but yeah, just pointing out that I think Tyler believes it.
My understanding is that he basically thinks norms associated with social conservatives, in particular Mormons—he lists “savings, mutual assistance, family values and no drug and alcohol abuse” in this NYT piece—just make people better off. He’s especially big on the teetotaling thing; he thinks alcohol abuse is a major social problem we don’t do enough to address. I don’t exactly know if he thinks it’s more important for EAs to adopt conservative norms to improve their own welfare/productivity, or if EAs need to see the value of conservative norms for other people generally and start promoting them.
I don’t think he’s thinking of it as giving EA more mass appeal.
I listened to the interview yesterday. My take on what he said on this was rather that EA’s core principles don’t necessarily have to restrict it to what is its de facto socially liberal, coastal, Democratic Party demographic, and that socially conservative people could perfectly well buy into them, if they aren’t packaged as ‘this lefty thing’.
Cowen thinks there are limits to EA’s idea that we should be completely impartial in our decisions (that we should weigh all human lives as being equal in value when we make decisions, to the point where we only care about how many lives we can impact and not where in the world those lives are). He cites a thought experiment where aliens come to Earth and want to enslave humankind for their benefit. We don’t calculate whether more net happiness is generated if the aliens get what they want: the vast majority of people would always choose to fight alongside their fellow humans (thus being partial).
Cowen then claims that some degree of partiality is an inescapable part of human psychology, so we ought not to strive to be completely impartial. Not only does this run into Hume’s is-ought problem, as he’s using (what he believes to be) an empirical fact to derive an ought, but it doesn’t get to the core reason why we ought to be partial in some situations. This matters because having a core principle would more clearly define what the limits to our impartiality should be.
For example, I think the notion of personal and collective responsibility is extremely important here for setting clear limits: I am partial to, say, my family over strangers because I have relationships with them that make me accountable to them over strangers. Governments need to be partial to the citizens of their country over the citizens of other countries because they are funded through taxes and voted in by citizens.
Humans should fight on the side of humans in the war against aliens for two reasons: the first is that every human being is in a relationship with herself, making her responsible for not letting herself be enslaved. Secondly, one can include the idea of moral responsibility under the umbrella of personal and collective responsibility: even if only some humans are enslaved and there isn’t a personal benefit for most people to fight on the side of those humans, slavery is immoral, so we ought to fight for the rights and dignity of those people if there is something we can do about it. If a specific subset of humans engaged a whole race of aliens in battle (both sides were voluntarily engaged in the battle), and the winner didn’t enslave the loser, it would actually be wise to pick the side that would lead to the most net happiness, as mere tribalism is not a valid reason to be partial.
I think this is a reasonable response, but Cowen did anticipate the “slavery is immoral” response, and is right that this wouldn’t be a utilitarian response. You can fix that since there is an easily drawn line from utilitarianism to this response, but I think Cowen would respond that in this scenario we both wouldn’t and shouldn’t bother to do such fine reasoning and just accept our partialities. He does make a similar statement during the Q&A.
I’d contend that this is an example of mixing practical considerations with philosophical considerations. Of course we wouldn’t stop during an invasion of little green men who are killing and enslaving humans and wonder, “would it be better for them to win?” If you did stop to wonder, there might be many good reasons to say no, but if you’re asking a question of whether you’d stop and ask a question, it’s not a philosophical question anymore, or at least not a thought experiment. Timing is practical, not theoretical.
If it was really all about partialities, and not practical, it wouldn’t matter what side we were on. If we showed up on another planet, and could enslave/exterminate a bunch of little green men, should we stop to think about it before we did? Of course we should. And while maybe you can concoct a scenario in which it’s kill or be killed, there would be little question about the necessity to be certain that it wasn’t an option to simply turn around and go the other way.
He fails to bring up the tension with short AI timelines, which I think is important here. Lots of AI safety folks I’ve talked to argue that long term concerns about movement building, fertility trends, etc. aren’t important because of AGI happening soon.
I think this tension underlies a lot of discussions in the community.
I find myself agreeing with quite a lot of what he says in this video. On a personal level, the greatest difficulty I find when trying to wrap my mind around the values and principles of EA (as opposed to effective altruism, in small caps) is the axiom of impartiality, and its extension, to a degree, to animals. In some respects it is trivially obvious that all humans (and by extension, any creatures with sufficient reason and moral conscience) should be the possessors of an equal set of rights, but if you try to push that into moral-ethical obligations towards them, I can’t quite understand why we are supposed not to make distinctions, like valuing more those that are closest to us (our community) and those we consider wise, good, etc. That would not preclude possessing, at the same time, a more generalist and abstract empathy for all sentient beings, and a feeling of a degree of moral obligation to help them (even if to a lesser degree than those closest to one).
Cowen would benefit from understanding that EA has to have a duality, encompassing both philosophical aspects and practical aspects. The goal of EA is to have a practical effect on the world, but its effect will be limited if it’s not persuasive. The persuasive aspect requires it to describe its philosophical underpinnings. It cannot rely solely upon a pre-existing partial point of view, because it’s a departure from pre-existing partial points of view. You might call it a refinement of reciprocal altruism, which would be a widely shared pre-existing partial point of view that is not individualistic, one that escapes some of our other biases that limit the scope to which that point of view is applied.
That said, since EA does not try to force or coerce, it is left with persuasiveness. Persuasiveness must be rooted in something, and philosophy is where all such roots begin for individuals if they are not natural or imposed by force. To disambiguate the preceding, it’s not formal philosophy I refer to, but the broader scope philosophy which every individual engages with and is what formal philosophy attempts to describe and discuss.
The problem with the way Cowen engages with trolley problems and repugnant conclusions is that these are formal questions, out of formal philosophy, which are then forced into conflict with the practical world. The difference between someone who flips the trolley lever or accepts infinite double or nothing 51% bets, and someone who doesn’t, is whether they answer this as a formal question, or do not.
As a formal question, everything practical is removed. That is not possible in a practical environment. Hubris is necessary to accepting 51% double or nothing bets. Humility, not a different answer to a formal philosophical question, is what would have prevented FTX’s path.
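To spell out the arithmetic behind that bet (a rough sketch, with W_0 as an assumed starting bankroll): repeatedly staking everything on a 51% double-or-nothing gamble has an expected value that grows without bound, while the probability of not having gone bust shrinks geometrically.

$$E[\text{wealth after } n \text{ all-in bets}] = (1.02)^n \, W_0, \qquad P(\text{still solvent after } n \text{ bets}) = 0.51^n \approx 0.1\% \text{ for } n = 10.$$

This is one way to see why the formal expected-value answer and the practical answer come apart.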
You can actually feel Cowen is drawn toward this answer, but he continues to reject it. He doesn’t want EA to change, because he recognizes the value it provides. At the same time, he’s saying it’s wrong and should be more socially conservative.
I’d disagree with the specifically socially conservative aspect, but agree with the conservative aspect. In this case I mean conservative in its most original form: cautious about change. That said, I’m not convinced that the EA movement overall is not conservative, and I would not agree that “social conservatives” are all that conservative. The typical “social conservative” is willing to make grand unsubstantiated statements and advocate for some truly repressive actions to enforce maintaining a prior social order (or reverting to one long since abandoned).
Being socially conservative does not make you conservative. This narrow form of conservatism can put you in conflict with other aspects of conservatism, and I’d argue it has put today’s social conservatives at great odds with it. In addition, there are large groups of people we allow to use the social conservative label who are regressive. Regressivity is not conservative, as it’s no longer attempting to maintain the status quo. An attempt to regress will without a doubt have unintended consequences. It’s the existence of unintended consequences, and the humility to accept that you cannot see all of them, that is the only value proposition of conservatism, so once this is abandoned there’s no value left and we really shouldn’t accept the use of conservative to describe such groups.
So, in short, EA should continue to engage on its formal side, but should also continue to embrace practical principles during the translation of formal ideas to practical reality. If an alien does come to us and offer endless 51% double or nothing bets, we should be skeptical of the honesty, the ability, and all the margins that go along with accepting that bet. In like manner, when operating a financial company, someone should accept the possibility of downside, the inability to forecast the future, the reality of margin calls, the sometimes inscrutable wisdom of regulation, and the downsides of disregarding rules, even when it seems (to us) like we know better.
Forcing EA to divorce its practical and philosophical sides and then attacking the philosophical side using practical arguments is either dishonest, or a failure to understand EA. Accepting “partial point of view” as an absolute provides no room for EA to argue for any change. Conservatism may have a place, but it would be internally inconsistent if it were ever all-encompassing, because the one thing that has always been constant is change. Conservatism that rejects change, rather than simply applying some caution to it, posits an impossibility: change cannot be halted, only managed.