The public reason field seems to have all the makings of a degenerate research project. It’s a bunch of people debating fine points of definition who clearly don’t believe what they say. Take, for instance, Gerald Gaus. He theorized about diversity of thought because he hated it; he didn’t respect anyone other than those who agreed with him and did his philosophy his way. He wanted disciples. He was willing to sabotage his own department to make sure he got his way in hiring acolytes. Yet, oddly, public reason theorists who say they care about public justification never bother to learn what the public thinks or try to justify institutions and policies to them. In my view, Peter Singer cares much more about public justification than Rawls, Freeman, Gaus, Weithman, Benn, Quong, or Vallier. Singer provides public reasons to advocate his ideas; they don’t.
In general, political philosophy still seems to reward people for working on very abstract topics that don’t really matter. I’m not sure why. PPE-style philosophy and non-ideal theory are much harder and more cognitively demanding than definition-spinning and ideal theory, because you have to know more and have to deal with all the problems of human nature. Yet philosophy rewards the easier work over the harder stuff.
I wouldn’t worry about fixing philosophy. Just do good work and don’t worry about it. However, in general, I think there is way too much public investment in philosophy. Philosophy classes do not deliver the promised goods, so the money should be used elsewhere. If the field were cut in half and the money went to reducing the cost of college, that’d be a good start.
Jason Brennan
AMA: Jason Brennan, author of “Against Democracy” and creator of a Georgetown course on EA
A lot of people seem to hate EA because they come in convinced they know the solutions to this and that, but EA tells them those solutions don’t work and that stuff they reject does work.
For instance, if “neoliberal” means anything, it means a kind of mixed economy with lots of liberal markets but with various regulations and welfare programs. Empirically, this seems to work better than anything else we’ve tried—by a lot! But lots of people want to reject that a priori, and they hate how comfortable EAs are with doing what...works.
People tomorrow matter. We cannot simply impose costs upon them. As Feinberg argued long ago, if I left a time bomb underground that would explode in 200 years, then when it kills people, I am a murderer.
Still, we have good reason to think overall that people in the future will be much better off than we are. That doesn’t license us to hurt them for our benefit, but we can take steps that impose costs upon them IFF doing so is part of a reasonable risk-sharing scheme from which they benefit more than they lose.
EAs are bad at marketing to non-EAs.
Illustrative anecdote: A few years ago, I was in charge of our first year seminars at Georgetown. Every year, we pick a non-profit partner who gives the students a real problem that non-profit needs to have fixed. The students act as consultants to offer solutions in a case competition. The winners usually intern with the organization afterward to implement their ideas. I picked a major EA charity. They said, “We need to figure out how to raise money from sources more diverse than EA people. Almost all of our money comes from EA utilitarians and libertarians. How can we appeal to more people without diluting our message or using non-evidence-based forms of marketing?” During their presentation, I asked them, “Look, if you are evidence-based, what about the strong evidence that evidence-based marketing doesn’t appeal to the majority of donors? If EA is about taking effective means to one’s ends, doesn’t that mean sometimes using non-EA arguments and forms of persuasion?”
I have an unusually high amount of influence and public uptake. I am not as famous as Singer or Sandel, but I get more attention than most.
Despite that, I expect not to have much influence on actual policy or behavior. It’d be surprising if I did have much.
There’s a long shot game I’m sort of playing: You get new ideas out there. They spread around into the public discourse. People know of the arguments and ideas even if they don’t know the source. Then, when a crisis occurs, maybe 20-50 years down the road, they might be willing to experiment with your ideas to fix the crisis. That seems to be what happens with most big ideas in political philosophy that have any traction. It takes decades for the philosopher to influence outcomes, and when they do, people don’t even know the philosopher they are responding to. Maybe my stuff on what’s wrong with democracy and how we can improve it will be like that. Against Democracy has had a lot of success, so it’s possible. But I would think it’s more likely than not that it won’t do anything despite that.
I am a bit split on the data from polling younger people. Quite a bit of that data shows that they prefer the word/label “socialism” to “capitalism”. If you ask them whether socialism is better than capitalism, they say yes. But if you give them more specific things, such as asking whether the government should own all productive property or whether we should have markets, they tend to reject socialism in favor of capitalism, though not by a huge amount. Also, you see the memes going around where people use “socialism” to refer not to socialism, but to government-funded public goods and welfare policies.
Still, if people are confused, then demagogues can take advantage of them or they might end up voting for the wrong things.
I think the case for capitalism must be made not merely because some form of it works better than the alternatives, but because the empirics on immigration show that open borders with global market economies is the best and most effective solution to world poverty. Immigration beats both intra- and international redistribution in terms of its distributional and welfare effects.
However, socialism and open borders don’t mix well, because once you turn a society into a giant workers’ co-op, adding new members always comes at the expense of the current members.
Here’s what I’ve noticed when I give public talks:
1. People tend to agree that kidney sales should be allowed.
2. They tend to become much more in favor of open borders than they were before. They might not go full border liberal but they favor increased immigration.
3. They do not endorse epistocracy but they recognize democracy has serious built-in problems and stop saying we can fix it by doing “real democracy”.
Lots of people are talking about epistocracy. It gets frequent mentions in op-eds, magazines, etc. The idea is out there and people are mulling it over. Maybe someone will act on it in 20-50 years.
By taking economics classes. Really, from Henry Hazlitt’s Economics in One Lesson in high school, which repeated Bastiat’s idea that you look not merely at the short term consequences to an immediate group, but the long-term and less obvious consequences to everyone.
I see EA as, in effect, microeconomics applied to giving. I suspect this is why so many Marxists hate it!
Economic growth is vital. Here’s why:
PPP-adjusted GDP/capita is about $16,000 right now. Imagine I waved a magic wand that redistributed all of this in the form of consumable income, with equal shares for all. That’d mean everyone on earth lives on $16,000 a year. Better than what most people currently have, but still a lot worse than what we see in, say, Appalachian USA.
But this is misleading because this isn’t even possible. Lots of that GDP is in the form of government or capital expenditures. We need some money not to be consumed but to be invested in public goods, capital, etc., so we can produce next year. Empirically, maybe only about half of that at most could in principle be consumed as income. So, perfect egalitarianism gets us to maybe $8000 per person right now. Still better than what many experience, but not real security or comfort.
Growth > equality when it comes to welfare for this reason. We need to make more pie so that everyone has enough; right now there is not enough pie for everyone to have a good slice, even if we gave everyone an equal slice.
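The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration using only the round figures from the text (the $16,000 PPP GDP/capita and the rough one-half consumable share), not precise statistics:

```python
# Back-of-the-envelope redistribution arithmetic, using the round
# numbers from the text above; these are not precise statistics.
GDP_PER_CAPITA_PPP = 16_000  # approx. world PPP-adjusted GDP per capita, USD/year
CONSUMABLE_SHARE = 0.5       # rough upper bound on the share that could be consumed

# Naive perfect redistribution: everyone gets the full per-capita figure.
naive_equal_share = GDP_PER_CAPITA_PPP  # $16,000 per person per year

# But investment, capital replacement, and public goods can't be eaten,
# so at most about half is actually available as consumable income.
feasible_equal_share = GDP_PER_CAPITA_PPP * CONSUMABLE_SHARE  # $8,000 per person

print(naive_equal_share, feasible_equal_share)  # 16000 8000.0
```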
Consequentialist arguments favor liberalism because in practice, it works and other things don’t. Most of my arguments for institutions are consequentialist. Economic justifications are consequentialist.
I think consequentialists get stuck thinking liberalism fails because, sitting in an armchair, they can imagine giving someone unilateral power to break liberal rules and then imagine this results in more good. But in practice, that power rarely works as intended, and it gets captured by people who use it for bad ends or use it incompetently. So, I think consequentialism + robust political economy → liberalism.
These are great questions. I’ll need to look into this more and come back to you.
I am tempted to say the stuff on open borders and immigration, because the welfare effects of increased immigration are much higher than anything else I’ve worked on. But realistically, it’s difficult to change people’s minds even when you give them overwhelming evidence.
The work I did with Peter Jaworski on taboo markets seems persuasive to most people who encounter it. If people followed our advice, we’d save tens of thousands of lives per year in the US. But then the issue is that even if you agree with us, it’s not like you can personally legalize kidney markets or other needed markets.
That’s kind of the problem with much of my work. It’s about politics, institutions, and policy. Even when there’s good advice, it’s not like readers have the power to act on it, and the people in power have little incentive to do what’s right.
Definitely do the Ethics Project! Indeed, if you want to do it, hit me up! I have something like $20,000 a year to seed it at other colleges.
Other things I do:
1. Teach incentives and perverse incentives.
2. Teach moral psychology and the psychology behind giving behavior. (It’s depressing but teach it anyway.)
3. Ask students to write a critique of a charity or NGO. Have them identify what a charity is doing badly, why they are messing up, what perverse incentives or psych mechanisms cause it, and what they could do to change the culture or incentives to produce better outcomes.
4. Have students write an op-ed encouraging donations to a charity.
5. Have students do the giving game. I tell students I will donate $500 of my own money. They break into groups and make presentations defending the charity of their choice. I tell them not to use GiveWell charities because the work is already done for them. I then donate $500 to the best group’s choice.
Many EAs are smart neoliberals, but they don’t pay sufficient attention to government failure. They imagine running a bureaucracy the way they want, as if it were staffed by EAs, rather than staffed by regular people with regular foibles.
I think it has in some ways strengthened my overall philosophy. I’ve been pushing public choice ideas for a while, and the FDA and CDC seemed to band together this year to make that look right.
Epistocracy should not be confused with technocracy. In a technocracy, a small band of experts get lots of power to manipulate people, nudge them, or engage in social engineering. Many democrats are technocrats—indeed, the people I argue with, like Christiano, Estlund, and so on, are pretty hardcore technocrats who have been in favor of letting alphabet agencies have lots of dictatorial power during this crisis.
Instead, epistocracy is about weighing votes during elections to try to produce better electoral or referendum results. For instance, I favor a system of enlightened preference voting where we let everyone vote but we then calculate what the public would have supported had it been fully informed. And there is decent evidence that if we used it, one thing that would happen is that the resulting voting public would be more aware of the limitations of technocrats and would be more in favor of civil liberties.
I donate to GiveWell charities, like Against Malaria or Evidence Action. I also donate to places with whom I have a relationship and owe some degree of reciprocity—that is, I’ll give a small amount to my alma mater. But I regard my duties of beneficence as discharged by the effective donations, while the other donations are about transitive reciprocity rather than beneficence per se.
As for funding, nah, we don’t need more research funding. We’re all well-funded and can do what we do without big money. Indeed, even the $2.1 million I got from Templeton is not for me and my research, but to help others, and to do projects.
I’d like to try enlightened preference voting in Denmark or New Hampshire.
How it works:
1. Everyone votes for their preferred thing (whatever is being voted on).
2. Everyone somehow registers their demographic data.
3. Everyone takes a 30-question quiz on basic political information.
With 1-3, we then estimate what a demographically identical public would have voted for if it had gotten a perfect score on the quiz. We do that instead of what the majority/plurality actually voted for. There are lots of details here I’m not getting into, but that’s what I’d want to try. No one’s done it to actually decide policy, but researchers have been doing this in labs for a long time with good results.
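Steps 1-4 above can be sketched as a toy simulation. Everything here is invented for illustration: the demographic groups, the response curves, and the simple per-group linear extrapolation to a perfect quiz score stand in for the richer statistical models real enlightened-preference studies use:

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical electorate: each voter has a demographic group, a quiz
# score (0..30), and a vote (1 = Yes, 0 = No). The groups and the way
# knowledge shifts support are made up for this sketch.
def simulate_voter():
    group = random.choice(["A", "B"])
    score = random.randint(0, 30)
    # Toy model: group A's support rises with knowledge, group B's falls.
    p_yes = 0.3 + 0.02 * score if group == "A" else 0.7 - 0.01 * score
    return group, score, int(random.random() < p_yes)

voters = [simulate_voter() for _ in range(20_000)]

# Step 4: within each demographic cell, fit vote ~ score by least squares,
# then extrapolate support to a perfect quiz score of 30.
def informed_support(cell):
    xs = [score for _, score, _ in cell]
    ys = [vote for _, _, vote in cell]
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my + slope * (30 - mx)

raw = mean(vote for _, _, vote in voters)

# Reweight each cell's informed estimate by its actual share of voters,
# so the demographics of the simulated public stay fixed.
cells = {g: [v for v in voters if v[0] == g] for g in ("A", "B")}
informed = sum(
    len(cell) / len(voters) * informed_support(cell) for cell in cells.values()
)

print(f"raw support: {raw:.2f}, estimated fully-informed support: {informed:.2f}")
```

In this toy setup the fully-informed estimate comes out higher than the raw vote share, which is the kind of gap the method is designed to surface.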
If voting is serious business, we need to treat it as such.
Right before the 2020 US election, Gelman estimated that PA voters had a 1 in 8.8 million chance of breaking a tie. TX was 1 in 100 million. DC, 1 in 240 trillion.
Showing some votes have high expected utility means showing those same votes can have high expected disutility.
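The point about symmetry is just arithmetic. Here is a toy expected-value calculation using the Gelman-style tie probabilities from the text; the $1 billion value gap is a made-up placeholder, not an estimate from any source:

```python
# Toy expected-value-of-a-vote calculation. Tie probabilities are the
# figures quoted in the text; the value gap is a hypothetical placeholder.
P_DECISIVE_PA = 1 / 8_800_000     # Pennsylvania voter, 2020 estimate
P_DECISIVE_TX = 1 / 100_000_000   # Texas voter, 2020 estimate
ASSUMED_VALUE_GAP = 1e9  # hypothetical net social value difference, USD

ev_pa = P_DECISIVE_PA * ASSUMED_VALUE_GAP  # ~$113.6 in expected value
ev_tx = P_DECISIVE_TX * ASSUMED_VALUE_GAP  # $10.00 in expected value

# The symmetry point: if the voter is wrong about which outcome is
# better, the same arithmetic yields an expected loss of equal size.
ev_if_wrong = P_DECISIVE_PA * -ASSUMED_VALUE_GAP  # ~-$113.6

print(ev_pa, ev_tx, ev_if_wrong)
```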
It’s weird that Wiblin and MacAskill will be like, “Hey, careful! Before you donate $50, make sure you are doing good rather than wasting the money or, worse, harming people. We are beset by biases that make us donate badly and we need to be careful.” But then when it comes to voting, they often advise people to just vote, or to guesstimate effects, when in fact the empirical work shows people are much more biased and terrible at judging politics than almost anything else.
Most people do not know enough to vote well, and voting well is hard. Believing it is easy is itself evidence of bias—that’s what the political psych shows. (Partisans downplay difficulty and think they are obviously right.) So if some people’s votes matter, rather than advising them to vote, period, we should advise them to be good EAs and be very careful about their votes.
It depends on the EA. I don’t know if there is a universal trend or generalized flaws. EAs seem so diverse that it’s hard to generalize.
Still, if I generalize based on what I’ve read and whom I’ve talked to, here’s what I see:
1. EAs sometimes forget political economy issues. When they offer a political policy that would work, they forget that it will likely be captured by others who don’t share their values, or that the people running it will possibly be incompetent. In general, for politics, I recommend imagining that your preferred policies will cost 3 times what you expect and deliver 1/3rd the goods. Look at how incompetent the US government is and then remember institutions like this will be in charge.
2. EAs sometimes forget that most other people are not rationalists and do not base their opinions on evidence. The EA message doesn’t sell, not because their arguments are bad—their arguments are sound!—but because good arguments do not persuade people.