AMA: Jason Brennan, author of “Against Democracy” and creator of a Georgetown course on EA
I plan to start answering questions on Friday, August 20th.
About me
I’m the Flanagan Family Professor of Strategy, Economics, Ethics, and Public Policy at the McDonough School of Business at Georgetown University. I’m the author/co-author of fifteen books, including Business Ethics for Better Behavior (Oxford University Press, 2021), Why It’s OK to Want to Be Rich (Routledge, 2020), Cracks in the Ivory Tower (Oxford University Press, 2019) and In Defense of Openness (Oxford University Press, 2018). I work at the intersection of politics, philosophy, and economics, often focusing on the normative and empirical analysis of perverse incentives, on taboo markets, or on democratic theory. My most famous book is Against Democracy (Princeton University Press, 2016), and my books have been translated 25 times into 13 languages.
At Georgetown, I teach a range of courses, including “Managing Flawed People,” “Social Entrepreneurship, Non-Profits, and Effective Altruism,” “Business-Government Relations,” “The Structure of Global Industries,” and “The Moral Foundations of Market Society”.
I recently won a $2.1 million grant from the Templeton Foundation for a 3-year project on “Markets, Social Entrepreneurship, and Effective Altruism”. The funding will be used for visiting faculty, conferences, case competitions for social entrepreneurship projects, student research, pedagogical materials, and to fund the expansion of a social entrepreneurship project throughout our MBA program and at other universities.
The Ethics Project
The keystone project in each of my courses is the Ethics Project. You can read a lot more about it here and a little more about it here. Here is NBC Nightly News coverage of one student project.
The Ethics Project’s basic idea is simple:
Think of something good to do. Do it.
At the end of the semester, students, working in groups, are asked to make a presentation and write a report to answer a wide range of questions, including:
How did you interpret the imperative to do something good, and why?
How did you consider the trade-off between what’s best and what’s feasible?
What was your opportunity cost?
What obstacles did you expect to encounter, how did you plan for them, what obstacles did you in fact encounter, and how did you respond?
Did you add value to the world, taking into account the value of your outputs and the costs of all of your inputs?
Was your project a success, and how should we measure it?
What did you learn and what would you do differently?
Student projects range from the profound to the mundane. For instance, some students helped teenagers in a poor country start their own business, which quadrupled their family income in a short period. Others have started their own businesses on campus—with the most successful grossing something like six figures over a few years with about a 33% margin. One group installed plumbing features which saved the university tens of thousands of dollars in wasted water per year. Others have conducted fundraisers (with the record now at about $17,000), run events, purchased goods for charities and schools, and more.
The Ethics Project is an excellent way to teach business ethics, management, philosophy, economics, and effective altruism, because rather than asking students to talk about ethics, it asks students to learn by deliberating, acting, and then reflecting on what they did.
Ask me anything!
I’d be happy to answer questions on any of the topics I work on, teaching ethics and altruism, academic life, guitar/amp gear geekdom, work-life balance, or anything else you find of interest.
Do you think making the moral case for capitalism could be a very important thing to do? My impression is that the case for it seems to have been lost among the young, which could have important effects down the line
I am a bit split on the data from polling younger people. Quite a bit of that data shows that they prefer the word/label “socialism” to “capitalism”. If you ask them whether socialism is better than capitalism, they say yes. But if you give them more specific things, such as asking whether the government should own all productive property or whether we should have markets, they tend to reject socialism in favor of capitalism, though not by a huge amount. Also, you see the memes going around where people use “socialism” to refer not to socialism, but to government-funded public goods and welfare policies.
Still, if people are confused, then demagogues can take advantage of them or they might end up voting for the wrong things.
I think the case for capitalism must be made not merely because some form of it works better than the alternatives, but because the empirics on immigration show that open borders combined with global market economies are the best and most effective solution to world poverty. Immigration beats both intra- and international redistribution in terms of its distributional and welfare effects.
However, socialism and open borders don’t mix well, because once you turn a society into a giant workers’ co-op, adding new members always comes at the expense of the current members.
Why should that be the case? The wealth and income of this giant workers’ co-op are not fixed, so why shouldn’t they scale with the number of members?
Let’s say you have a 10-person workers’ co-op which shares income equally. Each person now gets paid 1/10th of the firm’s profit. Thanks to diminishing marginal returns, if you add an 11th worker who is otherwise identical, they will contribute gross revenue/have a marginal product of labor that is less than the previously added worker’s. When you divide the income by 11, everyone will make less.
This is a well-known problem in the econ lit. Of course, in real life, workers are not homogeneous, but the point remains that in general you get diminishing returns by adding workers.
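A minimal numeric sketch of that point (the revenue figures below are made up purely for illustration, not taken from the post):

```python
# Toy numbers (assumed) showing why an equal-share co-op resists adding members:
# the 11th worker adds revenue, but less than the current average, so every
# equal share shrinks even though total output grows.
revenue_by_headcount = {10: 1_000_000, 11: 1_050_000}  # 11th worker's marginal product: $50,000

for workers, revenue in revenue_by_headcount.items():
    print(f"{workers} workers: total revenue ${revenue:,}, equal share ${revenue // workers:,}")
# 10 workers: total revenue $1,000,000, equal share $100,000
# 11 workers: total revenue $1,050,000, equal share $95,454
```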
As a toy illustration, suppose that there are two countries, Richland and Poorland. Everyone in Richland makes $100,000/year. Everyone in Poorland makes $2,000/year. Suppose, however, that if half of the Poorlanders move to Richland, their income will go up by a factor of 15, while domestic Richlanders’ income will increase by 10%. Thus, imagine that after mass immigration, Richland has 100,000 Poorland immigrants now making $30,000/year, plus its 100,000 native workers now each making $110,000 a year. From a humanitarian and egalitarian standpoint, this is wonderful. Further, this isn’t merely a toy example; these are the kinds of income effects we actually see with immigration in capitalist economies.
But this same miraculous growth looks far less sexy when it occurs in a democratic socialist society with equalized incomes. Imagine that democratic socialist Richland is considering whether to allow 100,000 Poorlanders to immigrate. Imagine the Richlanders recognize that Poorlander immigrants will each directly contribute about $30,000 a year to the Richland economy, and further, thanks to complementarity effects, will induce the domestic Richlanders to contribute $110,000 rather than $100,000. But here the Richlanders might yet want to keep the Poorlanders out. After all, when they equalize income ((100,000 × $30,000 + 100,000 × $110,000)/200,000), average incomes fall to $70,000. Once we require equality, the Richlanders see the immigrants as causing each of them to suffer a 30% loss of income. While for capitalist Richland the immigrants were a boon, for socialist Richland they are a bust. Unless we imagine our socialist Richlanders are extremely and unrealistically altruistic, they will want to keep the Poorlanders out.
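For readers who want to check the arithmetic, here is the same Richland-Poorland calculation as a short sketch, using the numbers from the example above:

```python
# Richland-Poorland arithmetic from the example above.
natives, migrants = 100_000, 100_000
native_income, migrant_income = 110_000, 30_000   # post-immigration incomes

# Capitalist Richland: natives simply keep their (higher) $110,000.
# Socialist Richland: all income is pooled and equalized.
equalized = (natives * native_income + migrants * migrant_income) / (natives + migrants)

print(f"Equalized income after immigration: ${equalized:,.0f}")                             # $70,000
print(f"Change for natives vs. pre-immigration $100,000: {equalized / 100_000 - 1:+.0%}")   # -30%
```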
Things get worse once we consider how real-world ethnic and nationalist prejudices will affect things. In fact, people are biased against foreigners, especially foreigners of a different race or religion. However, the beauty of capitalism is that it makes employers pay to indulge their prejudices; it literally comes out of their pockets. Thus, it’s not surprising, despite what some journalists and academics claim, that when economists try to measure to what degree wage differentials are the result of employer discrimination, they find that it is at most quite small.[1]
[1] Goldin and Rouse 2000; Bertrand, Goldin, and Katz 2010; Bolotnyy and Emanuel 2018.
This might be a dumb question, but wouldn’t this theory imply that smaller companies are on average more efficient than larger companies (holding worker quality constant)? And isn’t the very existence of pretty large companies some degree of evidence against this?
The point he is making is about worker cooperatives, rather than firms in general. A widely recognised problem with worker cooperatives is that there are disincentives to scale, because adding more workers is a cost to the existing co-op owners. So the point doesn’t apply to privately owned companies, where new workers do not get a share of the business.
As presented, the efficiency claims seem to be agnostic about firm structure, while the worker coop-specific parts are about credit/profit allocation. (As usual, I could of course be misreading)
Yeah, I think this is an important point to make: There are lots of instances where adding more workers allows specialization. When it comes to literally an entire economy, though, things can get weird/complicated—which is why I think there are much better ways to explain why welfare-heavy states and open borders don’t mix well than by appealing to diminishing marginal returns to labor.
Your Richland-Poorland example is indeed illustrative, thanks. However, it seems the problem caused by immigration does not only occur when incomes in Richland were equalized before the immigration; it also occurs when people care about the degree of income inequality in their own country. So if Richlanders are free-market fans, but they do not like domestic inequality, they will want to keep the Poorlanders out.
The moral case for capitalism is lost...capitalism is unsustainable and leads to massive suffering. If EAs actually care about longtermism as they claim to, they ought to start seriously planning large scale economic transition.
I think almost all critiques of capitalism rest on a failure to understand what capitalism actually is. Capitalism is private ownership of the means of production. Socialism is public or common ownership of the means of production. Capitalism is not greed. Socialism is not benevolence and love. They are systems of ownership. Once you see this, a lot of criticisms of capitalism melt away.
This is one important contribution that Jason Brennan has made to philosophy—http://bleedingheartlibertarians.com/2014/06/socialism-%E2%89%A0-love-and-kindness-capitalism-%E2%89%A0-greed-and-fear/
If you think capitalism and socialism affect the social ethos, that may be true, but then you actually have to look at whether people are nicer in socialist countries like Cuba, Venezuela, the USSR, Vietnam in the 1980s, and so on. It doesn’t seem like they are.
When you’re talking about sustainability, you also have to look at the real environmental performance of these different systems of ownership. It is sort of true that capitalism drives climate change because it drives economic growth. But recognising that is inconsistent with your claim that it leads to massive suffering (which I assume means human suffering). Lots of socialist countries have a terrible environmental record, and many capitalist countries and social democracies (which are capitalist with redistribution) have very good environmental records—e.g. the UK, Germany, Sweden, Switzerland, and France all now have low and dropping emissions per head.
To say that the only way to solve climate change is to take the economy into state hands is, on the face of it, a huge claim that doesn’t seem very plausible. State oil companies don’t seem to care very much about sustainability, for example. State-controlled projects can be directed to any end, whether good or bad for the environment. Indeed, having monopolistic control of fossil fuels seems likely to do more harm. The UK used to have a nationalised state-controlled coal board that kept coal mines open long after they were economically viable. Thatcher destroyed this via privatisation. In that case, capitalism was clearly good for the environment and socialism bad.
I’m surprised this comment was downvoted so much. It doesn’t seem very nuanced, but there’s obviously a lot going wrong with modern capitalism. While free markets have historically been a key driver of the decline of global poverty (see e.g. this and this), I don’t think it’s wrong to say that longtermists should be thinking about large-scale economic transition (though it should most likely still involve free markets).
I think a downvoter’s view is that:
It packs powerful claims that really need to be unpacked (“unsustainable...massive suffering”) together with a backhand against the community (“actually care...claim to”) and extraordinary, vague demands (“large economic transition”), all in a single sentence.
It’s hard to be generous, since it’s so vague. If you tried to riff some “steelman” off it, you could work in almost any argument critical of capitalism or even EA in general, which isn’t a good sign.
The forum guidelines suggest I downvote comments when I dislike the effect they have on a conversation. One of the examples the guidelines give is when a comment contains an error or bad reasoning. While I think the reasoning in Ruth’s comment is fine, I think the claim that capitalism is unsustainable and causes “massive suffering” is an error. Nor is the claim backed up by any links to supporting evidence that might change my mind. The most likely effect of ruth_schlenker’s comment is to distract from Halstead’s original comment and inflame the discussion, i.e. have a negative effect on the conversation.
Capitalism could be worse than some alternative due to factory farming, climate change or various other global catastrophic risks, although we really need to consider specific alternatives. So far, I think it’s pretty clear that what we’ve been doing has been unsustainable, but that doesn’t mean replacing capitalism is better than reforming or regulating it, and technology does often address problems.
I don’t understand this claim/intuitively disagree with it as presented but don’t think I understand what you mean well enough to be sure I actually disagree.
I have in mind climate change and land use. If we kept consuming at current rates, wouldn’t we likely end up with catastrophic climate change?
If you include consumption trends, things look even worse, but we also have clean tech and government policy coming.
What is your view of population ethics? What do you make of longtermism?
People tomorrow matter. We cannot simply impose costs upon them. As Feinberg argued long ago, if I leave a time bomb underground that will explode in 200 years, then when it kills people, I am a murderer.
Still, we have good reason to think overall that people in the future will be much better off than we are. That doesn’t license us to hurt them for our benefit, but we can take steps that impose costs upon them IFF doing so is part of a reasonable risk-sharing scheme from which they benefit more than they lose.
Given the non-identity problem, doesn’t the requirement that future people benefit more than they lose allow us to leave future generations with quite a bad situation? E.g., emitting fossil fuels changes the identities of people in the future, and we could feasibly make the world >10°C hotter, which would leave lots of tropical countries in a bad state but would not harm those people, since they would not have existed had we not emitted.
I saw on your page that you have a book on criminal justice reform, and this is also an area Open Phil works in.
Maybe this is asking a bit much for an AMA in case you aren’t already very familiar with their work, but I’ll go ahead anyway:
Do you have any major points of disagreement with Open Phil’s work in this area?
Are there any particular interventions you think Open Phil should fund but haven’t?
Are there any particular grants Open Phil has made that you think stand out as exceptionally good?
More generally, what is your impression of their grant-making in this area?
These are great questions. I’ll need to look into this more and come back to you.
What do you think EAs get wrong about politics?
It depends on the EA. I don’t know if there is a universal trend or generalized flaws. EAs seem so diverse that it’s hard to generalize.
Still, if I generalize based on what I’ve read and whom I’ve talked to, here’s what I see:
1. EAs sometimes forget political economy issues. When they offer a political policy that would work, they forget that it will likely be captured by others who don’t share their values, or that the people running it will possibly be incompetent. In general, for politics, I recommend imagining that your preferred policies will cost 3 times what you expect and deliver 1/3rd the goods. Look at how incompetent the US government is and then remember institutions like this will be in charge.
2. EAs sometimes forget that most other people are not rationalists and do not base their opinions on evidence. The EA message doesn’t sell, not because their arguments are bad—their arguments are sound!—but because good arguments do not persuade people.
Which of your writings (including things like blog posts) do you consider most important for making the world a better place? Assuming many people agreed to deeply consider your arguments on one topic, what would you have them read?
I am tempted to say the stuff on open borders and immigration, because the welfare effects of increased immigration are much higher than anything else I’ve worked on. But realistically, it’s difficult to change people’s minds even when you give them overwhelming evidence.
The work I did with Peter Jaworski on taboo markets seems persuasive to most people who encounter it. If people followed our advice, we’d save tens of thousands of lives per year in the US. But then the issue is that even if you agree with us, it’s not like you can personally legalize kidney markets or other needed markets.
That’s kind of the problem with much of my work. It’s about politics, institutions, and policy. Even when there’s good advice, it’s not like readers have the power to act on it, and the people in power have little incentive to do what’s right.
Do you think that the West’s disastrous experience with Coronavirus (things like underinvesting in vaccines, not adopting challenge trials, not suppressing the virus, mixed messaging on masks early on, the FDA’s errors on testing, and others as enumerated in this thread, or in books like The Premonition) has strengthened, weakened, or not changed much the credibility of your thesis in ‘Against Democracy’, that we should expect better outcomes if we give the knowledgeable more freedom to choose policy?
For reasons it might weaken ‘Against Democracy’, it seems like a lot of expert bureaucracies did an unusually bad job because they couldn’t take correction, see this summary post for examples:
https://forum.effectivealtruism.org/posts/dYiJLvcRJ4nk4xm3X#Vax
For reasons it might strengthen the argument, it seems like the institutions that did better than average were the ones that were more able to act autonomously; see e.g. this from Alex Tabarrok,
https://marginalrevolution.com/marginalrevolution/2021/06/the-premonition.html
Or this summary
I think it has in some ways strengthened my overall philosophy. I’ve been pushing public choice ideas for a while, and the FDA and CDC seemed to band together this year to make that look right.
Epistocracy should not be confused with technocracy. In a technocracy, a small band of experts get lots of power to manipulate people, nudge them, or engage in social engineering. Many democrats are technocrats—indeed, the people I argue with, like Christiano, Estlund, and so on, are pretty hardcore technocrats who have been in favor of letting alphabet agencies have lots of dictatorial power during this crisis.
Instead, epistocracy is about weighing votes during elections to try to produce better electoral or referendum results. For instance, I favor a system of enlightened preference voting where we let everyone vote but we then calculate what the public would have supported had it been fully informed. And there is decent evidence that if we used it, one thing that would happen is that the resulting voting public would be more aware of the limitations of technocrats and would be more in favor of civil liberties.
Is it possible for you to elaborate more on this or easily provide links to a writeup?
Technocracy to me just means having experts with decision making power or influence, maybe in some institution, presumably with good governance and other checks.
This concept doesn’t immediately lead to thoughts of manipulation, or social engineering.
I’m trying to get educated here—I’m not being contentious, if this is the “Truth” about technocracy and the general mainstream belief, I want to learn about it
Do you mean Thomas Christiano and David Estlund?
I guess related to the above, it seems like the object level argument really depends on some assumptions.
It is just not clear what is being debated here in this subthread to me and I guess to many other readers of your AMA.
Again, is it possible for you to write just a little bit more on this or provide links to something to get novices up to speed?
I don’t want to get caught up in words. We can use new words:
Schmoop: Small bands of experts in bureaucracies get lots of power to unilaterally decide policy which controls citizens, businesses, etc.
Vleep: During elections, use some sort of knowledge-weighted voting system.
I am in favor of Vleep but oppose Schmoop. Lots of democrats favor Schmoop despite opposing Vleep. The recent failures of various regulatory agencies are failures of Schmoop but not Vleep. Against Democracy defends Vleep but not Schmoop.
Thanks for this reply. Would you say then that Covid has strengthened the case for some sorts of democracy reduction, but not others? So we should be more confident in enlightened preference voting but less confident in Garett Jones’ argument (from 10% less democracy) in favour of more independent agencies?
You may want to specify in what sense Western countries’ experience with Covid has been disastrous, in your view (and how you think policy should have been different).
It’s clear the agencies did a bad job, as expected, because they had perverse incentives. For instance, the FDA knows that if it approves something that works badly, it will be blamed. If it doesn’t approve something or it is slow to do so, most people won’t notice the invisible graveyard.
That said, it’s not clear to me whether making this a more open or democratic decision would have made it any better. Citizens are bad at long-term thinking, cost-benefit analysis, seeing the unseen, and so on. You’ve probably seen the surveys showing citizens were systematically misinformed about facts related to COVID and the vaccines.
Ideally we’d structure the bureaucracies’ incentives so that they get punished for the invisible graveyard, but it’s unclear how to do that. I’m really not sure what to do other than trying to streamline the process of approval or requiring that any drug approved in, say, Germany, the UK, Japan, and a few other countries is automatically approved here.
What do you make of Glen Weyl’s argument for a common-ownership self-assessed tax? In general, do you think people have strong rights of self-ownership? Do you think that people have strong ownership rights over the natural world, or do you think there are strong egalitarian restrictions on that? Where do you stand on left-libertarianism vs right-libertarianism?
I don’t find arguments for common world ownership very persuasive. It’d take too long to go through all the arguments to explain why here, so I’ll just leave my general worry: Common world ownership means we all have a say over everyone else, and it tends to make the world somewhat zero-sum. Every new person is an incursion on my ownership rights and dilutes my claims. I prefer institutional mechanisms that create positive-sum games. I realize Weyl agrees and thinks his proposal gets around this.
As for self-ownership, I think of course we own ourselves, but this doesn’t do much work philosophically. Here’s an excerpt from a paper I wrote with Bas van der Vossen:
Self-Ownership: Almost Uncontroversial
We own different things in different ways. The bundle of rights that constitutes ownership varies from thing owned to thing owned. The strength of these rights also varies. We can own a cat and a car, but our ownership of the cat—which is real ownership—doesn’t allow us to do as much with it as with our ownership of a car. The way we own cats is different from how we own cars, which is different from how we own a guitar, which is different from how we own a plot of land, and so on.
Morally-speaking, not just legally speaking, the kinds of rights we have over these various things varies. But we really can own each of them. If you prefer to say that ownership is “more extensive” when we have the full bundle of rights with no moral constraints on use, that’s fine. But even if there is more or less extensive ownership, it’s still ownership. Your cat is your cat. You are not allowed to torture it, neglect it, or have sex with it, but that’s not because the cat is partially society’s or anyone else’s. Nor is it because you don’t really own it.
Different kinds of moral arguments—such as Kantian deontological principles, or claims about what it takes to realize certain moral powers, or arguments from a privileged “original position”, or reflections on Strawsonian reactive attitudes, or sophisticated Millian consequentialism, or whatnot—lead us to believe that people have certain rights of exclusion and use over themselves, and possibly some other rights over themselves as well. And once you see how these rights shape up, you notice that people’s rights over themselves amount to the bundle of rights—to exclude, to use, to modify, etc.—that just so happens to look like what we call “property rights”. It is in this sense, then, that we call people self-owners.
More precisely, we can think of self-ownership as being made up of two variables. On the one hand, self-ownership offers protections (in the form of Hohfeldian claim-rights) against unwanted incursions on one’s person. On the other hand, self-ownership offers the freedom (in the form of Hohfeldian liberties) to use one’s person. Since liberties logically entail the absence of duties (including duties correlating to claim-rights), it follows that the two variables (internal to the idea of self-ownership) can be traded off against each other.
The real question, then, is what mix of the two variables internal to the idea of self-ownership (the claim to exclude and the freedom to use) is morally most desirable. This should be obvious, of course. Bas is a self-owner with the freedom to use his person, but this does not license him in punching Jason in the face. Self-ownership is not best understood by completely maximizing on the freedom-variable, to the complete denial of the exclusion-variable. And, again pace Sobel, self-ownership is also not best understood by maximizing on the exclusion-variable, to the complete denial of the freedom-variable.[i]
Every liberal thinks we each have strong rights to freely use our persons, and exclude others from them. Every liberal thinks that a woman has the right to say no to a demand for sex, on the grounds that it’s her body. In this sense, then, all liberals accept some version of a self-ownership thesis, though many of them would not describe their beliefs as such. (On this point, note that G. A. Cohen thought the self-ownership thesis was the essence of liberalism, not libertarian liberalism specifically (Cohen 2000, p. 252).)
However, in this kind of story, the concept of self-ownership can do almost no work in resolving disputes among liberals. What liberals—both left-liberals and libertarians—disagree about is how people own themselves, not that they own themselves. Our disputes are about how best to trade off the two variables internal to that very idea. Criticizing someone’s preferred conception of self-ownership, in other words, is like denying their conclusion. It’s a way of registering disagreement, but not an actual argument against their view.
For instance, consider a variation of Peter Singer’s famous thought experiment (Singer 1972). Imagine you see a toddler drowning in a puddle. Suppose you have bad legs and can’t save the child. Suppose also there is a healthy bystander nearby who could save the child, but who says, “I can’t be bothered. I don’t want my shoes to get muddy.” Now, finally, suppose you have a weapon, and so can force the bystander to save the toddler. May you do so? (Are you justified, or at least excused, in doing so?[ii])
Perhaps you think the answer is yes. Does this somehow invalidate or make trouble for the self-ownership thesis? Consider a somewhat related thought experiment. Suppose a car is barreling towards your child, and the only way to rescue him is to push him out of the way onto someone’s lawn. Or suppose your child is injured, and the only way to get him to the hospital and prevent his death is to hotwire a car. Or suppose you’re stuck in the woods when an unexpected, freak blizzard hits in May, and the only way to survive is to break into someone’s cabin. May you do any of these things? (Are you justified, or at least excused, in doing so?)
These are interesting questions, and virtually everyone agrees that the answer to these questions should be some kind of yes. (They disagree on what precise form that yes takes.) But we’ve never met anyone who says that because, in cases like this, you may be excused or justified in temporarily overriding others’ property rights (with the stipulation that you might owe them compensation in some cases), property is an inherently problematic concept and private property doesn’t exist. On the contrary, there are hundreds of years’ worth of common-law cases dealing with such issues, which are meant to show just what property amounts to, not that property doesn’t exist.
Again, something strange is going on with the critics of the self-ownership thesis. In order to show the thesis is incoherent or problematic, they have to make the position out to be something that no one would sensibly defend, and must use arguments that no one would find compelling against other forms of property. The compliment they pay to libertarians is that they straw man the position in order to critique it.
[i] For good discussion of this point, see Mack (2015).
[ii] The difference between justification and excuse is as follows: When a person is justified in doing X, the action is right. When a person is excused, the action is wrong, but her blameworthiness is reduced as a result of duress. So, for instance, killing a murderous intruder in self-defense is justified, while killing another person because a gunman coerced you into doing it on pain of your own death may be excusable.
The interesting thing about the Weyl proposal is that it is an alternative to private property that could potentially produce better social outcomes from a consequentialist/utilitarian/social welfare point of view. The reason is that it overcomes the tragedy of the anti-commons, in which holdouts can extract rents, sometimes at huge expense to society. If Weyl’s proposal would produce better outcomes, would you be in favour of it?
Do you think that any political or institutional reform projects could be highly impactful? What would you recommend—would it be Garett Jones 10% less democracy-type stuff or something more radical?
I work on stuff I think would be high impact if leaders acted on it: immigration liberalization, criminal justice reform, kidney and organ markets.
Jones is probably right but he’s not calling for much reform. He’s trying to get readers to not go more radically democratic than they already are.
Are there any small-scale experiments with epistocracy that you think countries or other jurisdictions should try as a first stab at testing this form of government? What would you like to see, and where?
I’d like to try enlightened preference voting in Denmark or New Hampshire.
How it works:
1. Everyone votes for their preferred thing (whatever is being voted on).
2. Everyone somehow registers their demographic data.
3. Everyone takes a 30-question quiz on basic political information.
With 1-3, we then estimate what a demographically identical public would have voted for if it had gotten a perfect score on the quiz, and we enact that instead of what the majority/plurality actually voted for.
There are lots of details here I’m not getting into, but that’s what I’d want to try. No one’s done it to actually decide policy, but researchers have been doing this in labs for a long time with good results.
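For readers curious what “estimating the fully informed vote” can look like in practice, here is a minimal sketch of the kind of simulation this research literature uses. It is not Brennan’s exact procedure; the simulated survey fields, the logistic model, and the scikit-learn dependency are all illustrative assumptions.

```python
# Sketch of enlightened-preference estimation: fit a model of vote choice on
# demographics, quiz score, and interactions, then predict each respondent's
# vote at a perfect quiz score and tally the counterfactual result.
# All data below is simulated; the model form is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000

# Simulated survey: two demographic variables, a 30-item knowledge quiz, a yes/no vote.
age = rng.integers(18, 90, n)
log_income = np.log(rng.lognormal(10.5, 0.6, n))
quiz = rng.binomial(30, rng.uniform(0.2, 0.9, n))
features = np.column_stack([age, log_income, quiz, age * quiz, log_income * quiz])

# Simulated "true" behavior: better-informed and older respondents lean yes.
p_yes = 1 / (1 + np.exp(-(0.08 * (quiz - 15) + 0.01 * (age - 50))))
vote = rng.binomial(1, p_yes)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(features, vote)

# Counterfactual: identical demographics, perfect quiz score.
informed = np.column_stack([age, log_income, np.full(n, 30), age * 30, log_income * 30])
print(f"Actual yes share:                   {vote.mean():.1%}")
print(f"Estimated fully informed yes share: {model.predict_proba(informed)[:, 1].mean():.1%}")
```

The demographic-by-knowledge interaction terms are what allow the counterfactual to shift different groups by different amounts, rather than applying one uniform correction to everyone.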
Are you worried about governments using the quizzes to favour certain groups regardless of political knowledge? Kind of like gerrymandering. Who will decide what answers are correct? Or do you expect that this would only be abused if they were going to do far worse anyway?
Also, maybe demographic data should include stuff like health conditions and personality, although intellectual disabilities may prevent people from scoring well in the first place, and intelligence/knowledge may correlate with actual interests, i.e. what would be good for a person. Has there been much written about this? I guess we’d hope the more informed would look after the less informed? We’re already hoping for this now with representative democracy.
Which areas of philosophy are currently neglected—research-wise and teaching-wise—and which get too much attention? How should philosophy research and teaching change? Are there structures or incentives that make philosophers less likely to focus on the most important topics? How could that change?
The public reason field seems to have all the makings of a degenerate research project. It’s a bunch of people debating fine points of definition who clearly don’t believe in what they say. Take, for instance, Gerald Gaus. He theorized about diversity of thought because he hated it; he didn’t respect anyone other than those who agreed with him and did his philosophy his way. He wanted disciples. He was willing to sabotage his own department to make sure he got his way in hiring acolytes. Yet, oddly, public reason theorists who say they care about public justification never bother to learn what the public thinks or try to justify institutions and policies to them. In my view, Peter Singer cares much more about public justification than Rawls, Freeman, Gaus, Weithman, Benn, Quong, or Vallier. Singer provides public reasons to advocate his ideas; they don’t.
In general, political philosophy still seems to reward people for working on very abstract topics that don’t really matter. I’m not sure why. PPE-style philosophy and non-ideal theory is much harder and more cognitively demanding than definition-spinning and ideal theory, because you have to know more and have to deal with all the problems of human nature. Yet philosophy rewards the easier to do work over the harder stuff.
I wouldn’t worry about fixing philosophy. Just do good work and don’t worry about it. However, in general, I think there is way too much public investment in philosophy. Philosophy classes do not deliver the promised goods, so the money should be used elsewhere. If the field were cut in half and the money went to reducing the cost of college, that’d be a good start.
I’d say in general political philosophy suffers from the fact that most political philosophers know little about political science, sociology, or economics. They think they can reason about the justice of institutions without knowing how these institutions work or why they exist. In principle, they could, but in practice, this just means that they sneak in mistaken empirical assumptions.
What do you think EAs are getting wrong, and why?
EAs are bad at marketing to non-EAs.
Illustrative anecdote: A few years ago, I was in charge of our first-year seminars at Georgetown. Every year, we pick a non-profit partner who gives the students a real problem that the non-profit needs to have fixed. The students act as consultants and offer solutions in a case competition. The winners usually intern with the organization afterward to implement their ideas. I picked a major EA charity. They said, “We need to figure out how to raise money from more diverse sources than just EA people. Almost all of our money comes from EA utilitarians and libertarians. How can we appeal to more people without diluting our message or using non-evidence-based forms of marketing?” During their presentation, I asked the students, “Look, if you are evidence-based, what about the strong evidence that evidence-based marketing doesn’t appeal to the majority of donors? If EA is about taking effective means to one’s ends, doesn’t that mean sometimes using non-EA arguments and forms of persuasion?”
What do you make of Rob Wiblin’s post on the value of voting—https://80000hours.org/articles/is-voting-important/
If voting is serious business, we need to treat it as such.
Right before the 2020 US election, Gelman estimated that PA voters had a 1 in 8.8 million chance of breaking a tie. In TX it was 1 in 100 million; in DC, 1 in 240 trillion.
Showing some votes have high expected utility means showing those same votes can have high expected disutility.
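To make the symmetry explicit, here is the expected-value arithmetic as a tiny sketch; the $10 billion value-of-the-better-candidate figure is an assumption for illustration, not a number from the post.

```python
# Expected value of a vote ≈ P(your vote is decisive) * value of the better candidate winning.
# The same arithmetic gives the expected disutility if you back the worse candidate.
p_decisive_pa = 1 / 8_800_000        # Gelman's pre-2020 Pennsylvania estimate
value_if_right = 10_000_000_000      # assumed social value ($) of the better candidate winning

expected_value = p_decisive_pa * value_if_right
print(f"Expected gain if you pick the better candidate: ${expected_value:,.0f}")   # ~$1,136
print(f"Expected harm if you pick the worse candidate:  ${expected_value:,.0f}")   # same magnitude
```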
It’s weird that Wiblin and MacAskill will be like, “Hey, careful! Before you donate $50, make sure you are doing good rather than wasting the money or, worse, harming people. We are beset by biases that make us donate badly and we need to be careful.” But then when it comes to voting, they often advise people to just vote, or to guesstimate effects, when in fact the empirical work shows that we are much more biased and terrible at judging politics than almost anything else.
Most people do not know enough to vote well, and voting well is hard. Believing it is easy is itself evidence of bias—that’s what the political psych shows. (Partisans downplay difficulty and think they are obviously right.) So if some people’s votes matter, rather than advising them to vote, period, we should advise them to be good EAs and be very careful about their votes.
I answered this before and it didn’t post. I’ll try again.
If voting matters, we have to treat it like it matters.
EAs warn people, “Don’t just donate $500! Be careful. Learn what works and what doesn’t. Make sure you give to an effective charity rather than an ineffective or harmful one. Be aware that you are biased to make bad choices!”
But all that applies to voting. If voting can be like donating $50,000, it can also be like robbing a charity of $50,000. But oddly I see EAs telling everyone to vote and telling them to guesstimate, even though our evidence is that people are much worse at judging politics than charities, and even though guesstimating a presidential candidate is orders of magnitude more difficult than judging a charity.
Is it sufficient for it to be good for EAs to vote that they are better than the median voter? (Which I think is probably true.)
You have written about the importance of economic growth—what do you make of Lant Pritchett’s arguments on that topic?
Economic growth is vital. Here’s why:
World PPP-adjusted GDP per capita is about $16,000 right now. Imagine I waved a magic wand that magically redistributed all of this in the form of consumable income, with equal shares for all. That’d mean everyone on earth lives on $16,000 a year. Better than what most people currently have, but still a lot worse than what we see in, say, Appalachian USA.
But this is misleading because this isn’t even possible. Lots of that GDP is in the form of government or capital expenditures. We need some money not to be consumed but to be invested in public goods, capital, etc., so we can produce next year. Empirically, maybe only about half of that at most could in principle be consumed as income. So, perfect egalitarianism gets us to maybe $8,000 per person right now. Still better than what many experience, but not real security or comfort.
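As a back-of-the-envelope sketch of that arithmetic (the 50% consumable share is the rough figure from the paragraph above):

```python
# World PPP GDP per capita, split equally, with roughly half available as
# consumable income (the rest going to investment, capital, and public goods).
gdp_per_capita_ppp = 16_000
consumable_share = 0.5               # rough assumption from the paragraph above

print(f"Equal split of all output:        ${gdp_per_capita_ppp:,.0f}/year")
print(f"Equal split of consumable income: ${gdp_per_capita_ppp * consumable_share:,.0f}/year")  # ~$8,000
```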
Growth > equality when it comes to welfare for this reason. We need to make more pie so that everyone has enough; right now there is not enough pie for everyone to have a good slice, even if we gave everyone an equal slice.
Do you have a place where you’ve addressed critiques of Against Democracy that have come out after it was published, like the ones in https://quillette.com/2020/03/22/against-democracy-a-review/ for example?
Most liberals and libertarians identify with non-consequentialist ethics. Consequentialism is (sometimes?often?) seen as an antagonist or threat to liberalism or libertarianism. Sometimes, I worry that the strong connection of Effective Altruism to consequentialist ethical positions serves as a hindrance in popularizing it among modern liberals and libertarians.
Do you agree with this assessment? Do you think this can change? In what ways would you like to see consequentialists engage with liberal or libertarian ideas? In what ways can we make liberals or libertarians engage more with consequentialist ideas?
Consequentialist arguments favor liberalism because, in practice, it works and other things don’t. Most of my arguments for institutions are consequentialist. Economic justifications are consequentialist.
I think consequentialists get stuck thinking liberalism fails because, sitting in an armchair, they can imagine giving someone unilateral power to break liberal rules and then imagine this results in more good. But in practice, that power rarely works as intended, and it gets captured by people who use it for bad ends or use it incompetently. So, I think consequentialism + robust political economy → liberalism.
Can you address these concerns about Open Borders?
https://www.forbes.com/sites/modeledbehavior/2017/02/26/why-i-dont-support-open-borders
Open borders is in some sense the default, and states had to explicitly decide to impose immigration controls. Why is it that every nation-state on Earth has decided to impose immigration controls? I suspect it may be through a process of cultural evolution in which states that failed to impose immigration controls ceased to exist. (See https://en.wikipedia.org/wiki/Second_Boer_War for one example that I happened to come across recently.) Do you have another explanation for this?
What do you think EAs get wrong about economics?
Many EAs are smart neoliberals, but they don’t pay sufficient attention to government failure. They imagine running a bureaucracy the way they want, as if it were staffed by EAs, rather than staffed by regular people with regular foibles.
You mention that part of your Templeton Foundation grant will fund “student research in effective altruism”.
Do you have any particular research topics in mind?
Have you been in touch with organizations like the EA Infrastructure Fund that also fund research in this area? (There are a few — I’d be happy to connect you!)
Nothing in particular. I will leave it up to students by having a call for research projects very soon. I think students can come up with really cool ideas on their own—indeed, a few have already pitched things to me that are worth funding. I will look into that group. Thanks for offering—I may take you up on it.
How likely do you think it is that your writing against voting will affect the results of a government election, at any level of government anywhere? :P
If it did, what kind of difference would you expect from it, if any?
I have an unusually high amount of influence and public uptake. I am not as famous as Singer or Sandel, but I get more attention than most.
Despite that, I expect not to have much influence on actual policy or behavior. It’d be surprising if I did have much.
There’s a long shot game I’m sort of playing: You get new ideas out there. They spread around into the public discourse. People know of the arguments and ideas even if they don’t know the source. Then, when a crisis occurs, maybe 20-50 years down the road, they might be willing to experiment with your ideas to fix the crisis. That seems to be what happens with most big ideas in political philosophy that have any traction. It takes decades for the philosopher to influence outcomes, and when they do, people don’t even know the philosopher they are responding to. Maybe my stuff on what’s wrong with democracy and how we can improve it will be like that. Against Democracy has had a lot of success, so it’s possible. But I would think it’s more likely than not that it won’t do anything despite that.
Thanks for asking!
1. I make a routine of writing for four hours a day, every working day, before I do other kinds of work. Answering emails, refereeing papers and books, attending meetings, preparing classes, and the like, require less brain power, so I have them go last. If you let them go first, they tend to eat up time and energy that is better used on research.
2. I stop working no later than 5 pm unless I’m away giving a talk. Work is a 9-5 job.
3. I work from home so I cut down commute times.
4. Luckily for me, my fellow band members are also advanced musicians, so we usually rehearse a new song only once (and sometimes not at all) before we play it live. We had maybe twenty gigs last year (and could have done more but for our schedules), but we rehearsed twice.
5. My job requires less teaching work than most others’. In a normal year, I spend only about 90 hours a year (3 undergraduate classes) inside a classroom. So I have more freedom and time to work on publishing than most academics. I literally spent more time last academic year helping a theater teacher friend by playing guitar for her production of Mean Girls than I did teaching in a classroom.
What do you think of the proposals in Longtermist Institutional Reform? If you’re supportive, what should happen at the current margin to push them forward?
What specific reforms do you think are most worthy of an additional dollar?
I realized after reading this question that most of the reforms I work on save dollars rather than cost them: Eliminate cash bail, eliminate career prosecutors and instead have prosecutors and public defenders be the same people from the same office, eliminate SWAT teams in most towns and federal distribution of military equipment to the police, open borders, require the FDA to auto-approve any drug approved in certain other countries, etc. Most of these things are free.
Sorry I guess I meant, “What specific reforms do you think are most worthy of an additional dollar of advocacy for them?”
What ethical views do you hold that you disagree most on with other ethicists?
That democracy is not good in itself. I see it as a tool for producing good outcomes; nothing more.
I view democracy as a system in which some people push other people around. It’s not really equal and it cannot be made equal. Even if it were equal, it would still be a system in which some people push other people around.
I also deny that an unjust policy can be rendered just by coming about the right way. I don’t believe there is such a thing as “legitimacy” which enables governments to do something unjust because of how they decided. For me, that makes it sound like morality has an absurd loophole: Hey, if you want to violate rights or hurt people, it’s okay, so long as you decide to do it through a convoluted process.
How did you get into effective altruism?
By taking economics classes. Really, from Henry Hazlitt’s Economics in One Lesson in high school, which repeated Bastiat’s idea that you should look not merely at the short-term consequences for an immediate group, but at the long-term and less obvious consequences for everyone.
I see EA as, in effect, microeconomics applied to giving. I suspect this is why so many Marxists hate it!
What are some things that convinced you the Ethics Project was working well for its intended purpose? Are there any particular student stories, or things students said, that really stick with you as examples of the Project fulfilling its goals?
The Ethics Project requires students to deliberate ahead of acting, then act, and then reflect on what they did. Instead of role-playing problems, they deal with real-life problems first-hand. Educational psych lit says that adult learners learn by doing. The moral blind spots lit says that people learn to behave better by practicing reflecting on their strategic decisions before acting.
Students routinely say it was the most significant learning experience they had. That’s validating.
I like it also because it shakes students out of their naïveté. They tend to think social change and making a difference is easy. But then they have to do this activity, and they get to see first-hand how red tape, free riding, distrust, and all sorts of other obstacles stop them. But they get a chance to overcome them.
If you had to participate in the Ethics Project yourself, doing something good with $1000, what might you end up doing?
The more challenging version: Assume you’ll spend at least as much time as the average MBA student, and that you’ll be aiming to do more good than any of the students in the current round of the project.
A few ideas:
1. Spend the money replacing certain water heater elements at Georgetown. Some students did this for a few dorms, but it could be done for others. $200 can save the university tens of thousands of dollars per year. Indeed, it’s bizarre the university didn’t copy the students’ project.
2. Help people start a small business in a poor country. $1000 can get one off the ground.
3. Do a fundraiser. $1000 can be turned into $20,000, which can be given to an effective charity. Federal rules prohibit direct donations, but the $1000 can be turned into more money that can be donated.
What are your views on metaethics and normative ethics (consequentialism, deontology, virtue ethics, a mix of them; theory of value like hedonism, preference view; any other specifics)?
Either moral realism or moral nihilism. Everything else is a joke. Morality is either real or bullshit. Every in-between theory ends up being a disguised form of one of these or is incoherent.
As for moral theory, I see moral theories as tools. Consider an analogy: Quantum mechanics and relativistic mechanics are, as of now, incompatible with each other. They describe the world in incompatible ways. Sociology, psychology, and economics describe human nature and behavior with incompatible models. But when we want to understand the world, we use different models from different theories or even different fields for different purposes, despite these models not all being compatible with one another. Why can’t moral theory be the same? For some questions, deontology provides the most illuminating model. For others, virtue theory does. We don’t have one good master moral theory, but we don’t have one good master social scientific theory or theory of physics either.
A couple questions: What, if any, personal donations do you make?
Would you welcome a philanthropist who came into the political philosophy sphere and urged top philosophers like yourself, Chris Freiman, Michael Huemer, and David Schmidtz to join together and write a “master argument” (attempting to have the same effect in philosophy that Rawls’s Theory of Justice had) to advance the neoclassical liberal/anarchist brand of political philosophy? Do you think this project would be worth funding? (I.e., do you think extra funding would help you and other philosophers produce a new theory of justice for the 21st century? Or: would funding make any difference?)
I donate to GiveWell charities, like Against Malaria or Evidence Action. I also donate to places with whom I have a relationship and owe some degree of reciprocity—that is, I’ll give a small amount to my alma mater. But I regard my duties of beneficence as discharged by the effective donations, while the other donations are about transitive reciprocity rather than beneficence per se.
As for funding, nah, we don’t need more research funding. We’re all well-funded and can do what we do without big money. Indeed, even the $2.1 million I got from Templeton is not for me and my research, but to help others, and to do projects.
How tractable do you feel are some of your more taboo political ideas?
Here’s what I’ve noticed when I give public talks:
1. People tend to agree that kidney sales should be allowed.
2. They tend to become much more in favor of open borders than they were before. They might not go full border liberal but they favor increased immigration.
3. They do not endorse epistocracy but they recognize democracy has serious built-in problems and stop saying we can fix it by doing “real democracy”.
Lots of people are talking about epistocracy. It gets frequent mentions in op-eds, magazines, etc. The idea is out there and people are mulling it over. Maybe someone will act on it in 20-50 years.
I think it could be good to specify which ideas you’re referring to.
What can an EA academic do to improve the incentives in the research side of academia? To help reward quality or even positive impact?
I wrote a whole book about perverse incentives in academia, but I am not sure there is much we can do other than do more EA work.
At the end of the day, researchers try to publish in the best journals they can because that’s where the money and prestige is. They will tend to work on whatever topics are sexy because that’s what it takes to publish.
Why do some topics become sexy and others not? For instance, why is it that something that doesn’t matter at all—such as splitting the millionth hair on the definition of some term in public reason theory—will get into PPA, but if Peter Singer writes an argument about how to actually save a million lives, it won’t? I don’t know.
But the best we can do is do the work we think is valuable and make the best case for it. If we’re lucky, it’ll become sexy and others will have an incentive to do more of it too.
What advice do you have for teaching EA courses in an academic context (esp. philosophy)? Besides the Ethics projects, which parts of your classes on the topic do you think are most successful or most popular?
Definitely do the Ethics Project! Indeed, if you want to do it, hit me up! I have something like $20,000 a year to seed it at other colleges.
Other things I do:
1. Teach incentives and perverse incentives.
2. Teach moral psychology and the psychology behind giving behavior. (It’s depressing but teach it anyway.)
3. Ask students to write a critique of a charity or NGO. Have them identify what a charity is doing badly, why they are messing up, what perverse incentives or psych mechanisms cause it, and what they could do to change the culture or incentives to produce better outcomes.
4. Have students write an op-ed encouraging donations to a charity.
5. Have students do the giving game. I tell students I will donate $500 of my own money. They break into groups and make presentations defending the charity of their choice. I tell them not to use GiveWell charities because the work is already done for them. I then donate $500 to the best group’s choice.
P.S. Regarding funding, we can give the money to any other US college or university. We’ll have to figure out the mechanics—it may be that we can directly donate it to your school to use it, or, more likely, you’d have your class do it and we’d pay for students’ expenses.
How do you think about the ‘demandingness of morality’ (i.e. what percentage of your income you should give; the amount of time you spend working on high-impact topics)? If you can meet your obligations of beneficence, what is the underlying principle or reason that explains where that falls?
What is your understanding on whether decision makers and the general public are more in favour of a political system which implies certain nice values (democracy implies all should have a political voice and can meaningfully contribute to policy discussions) versus being open to one that might limit participation in favour of efficacy?
I’m particularly curious as to whether you believe people are becoming more open to the latter in the context of technological developments and the increasing complexity of prominent public policy issues, which means that the layperson (myself included) is becoming less able to deeply understand and contribute to a wide variety of policy debates.
My favorite book on human nature is The Elephant in the Brain by Simler and Hanson. I think they provide overwhelming evidence that people are mostly motivated by ignoble, self-centered motives. This explains why institutions are so dysfunctional or inefficient: Politics is not about policy, charity is not about helping, medicine is not about healing, education is not about learning. Once you read their book and see their evidence, you realize that people will generally do what sounds nice rather than what works. And this stuff is independent of, say, the political irrationality literature which says voters reason badly because politics is a commons.
The odd thing I find is that people are democratic but technocratic, while I am epistocratic but anti-technocratic. Like they want equal voting rights, but then concentrate power in the hands of bureaucrats with perverse incentives, while I want enlightened preference voting but unconcentrated power.
I knew EA was full of technocrats. Glad I left the movement before they started showing themselves. Oppression may be sustainable for a couple thousand years...but what about longtermism? What about human flourishing? This is just a run of the mill conservative take wrapped up in slightly fancier language. Why don’t we simply educate people and get money out of politics?
A lot of people seem to hate EA because they come convinced they know the solutions to this and that, but EA tells them those solutions don’t work and stuff they reject works.
For instance, if “neoliberal” means anything, it means a kind of mixed economy with lots of liberal markets but with various regulations and welfare programs. Empirically, this seems to work better than anything else we’ve tried—by a lot! But lots of people want to reject that a priori, and they hate how comfortable EAs are with doing what...works.