I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.
zdgroff
Two Nice Experiments on Democracy and Altruism
Thanks! Helpful follow-ups.
On the first point, I think your intuition does capture the information aversion here, but I still think information aversion is an accurate description. Offered a bet that pays $X if I pick a color and then see if a random ball matches that color, you’ll pay more than for a bet that pays $X if a random ball is red. The only difference between these situations is that you have more information in the latter: you know the color to match is red. That makes you less willing to pay. And there’s no obvious reason why this information aversion would be something like a useful heuristic.
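To make the comparison concrete, here is a minimal sketch (with made-up numbers) of how a worst-case (maxmin) ambiguity-averse evaluator prices the two bets. The ambiguity set for the urn's share of red balls is my own assumption for illustration:

```python
# Hypothetical ambiguity: the urn's share of red balls p is known only
# to lie somewhere in [0.25, 0.75].
priors = [0.25, 0.5, 0.75]  # candidate values of p
X = 1.0  # payoff in dollars

# Bet 1: flip a fair coin to pick a color, then win if a random ball matches.
# Win probability is 0.5*p + 0.5*(1-p) = 0.5 regardless of p.
bet1_values = [(0.5 * p + 0.5 * (1 - p)) * X for p in priors]

# Bet 2: you learn the color to match is red, so you win with probability p.
bet2_values = [p * X for p in priors]

# A maxmin (ambiguity-averse) evaluator prices each bet at its worst case.
maxmin_bet1 = min(bet1_values)  # 0.5: the randomization hedges out ambiguity
maxmin_bet2 = min(bet2_values)  # 0.25: knowing the color exposes you to ambiguity

print(maxmin_bet1, maxmin_bet2)
```

The extra information (knowing the color is red) strictly lowers the bet's price under this rule, which is the information aversion described above.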
I don’t quite get the second point. Commitment doesn’t seem very relevant here since it’s really just a difference in what you would pay for each situation. If one comes first, I don’t see any reason why it would make sense to commit, so I don’t think that strengthens the case for ambiguity aversion in any way. But I think I might be confused here.
Yeah, that’s the part I’m referring to. I take his comment that expectations are not random variables to be criticizing taking expectations over expected utility with respect to uncertain probabilities.
I think the critical review of ambiguity aversion I linked to is sufficiently general that any alternative to taking expectations with respect to uncertain probabilities will have seriously undesirable features.
Thanks for writing this. I think it’s very valuable to be having this discussion. Longtermism is a novel, strange, and highly demanding idea, so it merits a great deal of scrutiny. That said, I agree with the thesis and don’t currently find your objections against longtermism persuasive (although in one case I think they suggest a specific set of approaches to longtermism).
I’ll start with the expected value argument, specifically the note that probabilities here are uncertain and therefore random variables, whereas in traditional EU they’re constant. To me, a charitable version of Greaves and MacAskill’s argument is that, taking the expectation over the probabilities times the outcomes, you have a large future in expectation. (What you need for the randomness of probabilities to sink longtermism is for the probabilities to correlate inversely and strongly with the size of the future.) I don’t think they’d claim the probabilities are certain.
Maybe the claim you want to make, then, is that we should treat random probabilities differently from certain probabilities, i.e. you should not “take expectations” over probabilities in the way I’ve described. The problem with this is that (a) alternatives to taking expectations over probabilities have been explored in the literature, and they have a lot of undesirable features; and (b) alternatives to taking expectations over probabilities do not necessarily reject longtermism. I’ll discuss (b), since it involves providing an example for (a).
(b) In economics at least, Gilboa and Schmeidler (1989) propose what’s probably the best-known alternative to EU when the probabilities are uncertain, which involves maximizing expected utility for the prior according to which utility is the lowest, sort of a meta-level risk aversion. They prove that this is the optimal decision rule according to some remarkably weak assumptions. If you take this approach, it’s far from clear you’ll reject longtermism: more likely, you end up with a sort of longtermism focused on averting long-term suffering, i.e. focused on maximizing expected value according to the most pessimistic probabilities. There are a bunch of other approaches, but they tend to have similar flavors. So alternatives to EU may agree on longtermism and just disagree on the flavor of it.
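As a rough illustration of the Gilboa–Schmeidler rule (all numbers invented), here is a sketch comparing standard EU under an averaged prior with maxmin EU. The actions, outcomes, and utilities are hypothetical labels, not from the paper:

```python
# Two candidate probability distributions ("priors") over three outcomes,
# and two actions with known utilities in each outcome. All numbers made up.
priors = {
    "optimistic": [0.6, 0.3, 0.1],
    "pessimistic": [0.1, 0.3, 0.6],
}
utilities = {
    "action_A": [10, 2, -2],  # high upside, some downside
    "action_B": [4, 3, 1],    # safer across the board
}

def expected_utility(utils, prob):
    return sum(u * p for u, p in zip(utils, prob))

# Standard EU: average the priors, then maximize expected utility.
avg_prior = [sum(ps) / len(priors) for ps in zip(*priors.values())]
eu_choice = max(utilities, key=lambda a: expected_utility(utilities[a], avg_prior))

# Gilboa–Schmeidler maxmin: evaluate each action under its own worst-case prior,
# then pick the action whose worst case is best.
def maxmin_value(utils):
    return min(expected_utility(utils, p) for p in priors.values())

maxmin_choice = max(utilities, key=lambda a: maxmin_value(utilities[a]))

print(eu_choice, maxmin_choice)  # the two rules pick different actions here
```

With these numbers, averaging over priors favors the high-upside action, while the maxmin rule favors the action that does least badly under the pessimistic prior, which is the "averting worst cases" flavor of longtermism described above.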
(a) Moving away from EU leads to a lot of problems. As I’m sure you know given your technical background, EU derives from a really nice set of axioms (the Savage axioms). Things go awry when you leave it. Al-Najjar and Weinstein (2009) offer a persuasive discussion of this (H/T Phil Trammell). For example, non-EU models imply information aversion. Now, a certain sort of information aversion might make sense in the context of longtermism. In line with your Popper quote, it might make sense to avoid information about the feasibility of highly specific future scenarios. But that’s not really the sort of information non-EU models imply aversion to. Instead, they imply aversion to information that would shift you toward the currently ambiguous option, which you dislike precisely because of its ambiguity.
So I don’t think we can leave behind EU for another approach to evaluating outcomes. The problems, to me, seem to lie elsewhere. I think there are problems with the way we’re arriving at probabilities (inventing subjective ones that invite biases and failing to adequately stick to base rates, for example). I also think there might be a point to be made about having priors on unlikely conclusions so that, for example, the conclusion of strong longtermism is so strange that we should be disinclined to buy into it based on the uncertainty about probabilities feeding into the claim. But the approach itself seems right to me. I honestly spent some time looking for alternative approaches because of these last two concerns I mentioned and came away thinking that EU is the best we’ve got.
I’d note, finally, that I take the utopianism point well and would like to see more discussion of this. Utopian movements have a sordid history, and Popper is spot-on. Longtermism doesn’t have to be utopian, though. Avoiding really bad outcomes, or striving for a middling outcome, is not utopian. This seems to me to dovetail with my proposal in the last paragraph to improve our probability estimates. Sticking carefully to base rates and things we have some idea about seems to be a good way to avoid utopianism and its pitfalls. So I’d suggest a form of longtermism that is humble about what we know and strives to get the least-bad empirical data possible, but I still think longtermism comes out on top.
This this this! As a PhD student in economics, I’m always pushing for the same thing in academia. People usually think saying “nice job” is useless because it doesn’t help people improve. It’s important for people to know what they’re doing right, though. It’s also important for people to get positive reinforcement to keep going down a path, so if you want someone to keep persevering (which I hope we generally do), it’s good to give them a boost when they do a good job.
[Question] How Much Does New Research Inform Us About Existential Climate Risk?
Thanks for writing this. I’ve had similar questions myself.
I think the incentives issue here is a big one. One way I’ve wondered about addressing it is to find a bunch of people who forecast really well and whose judgments are not substantially affected by forecasting incentives. Then have them forecast risks. Might that work, and has anyone tried it?
I’m excited to see this! One thing I’d mention on the historian path and its competitiveness is you could probably do a lot of this sort of work as an economic historian with a PhD in economics. Economic historians study everything from gender roles to religion and do ambitious if controversial quantitative analyses of long-term trends. While economists broadly may give little consideration to historical context, the field of economic history prides itself on actually caring about history for its own sake as well, so you can spend time doing traditional historian things, like working with archival documents (see the Preface to the Oxford Encyclopedia of Economic History for a discussion of the field’s norms).
The good thing here is it probably allows for greater outside options and potentially less competitiveness than a history PhD given the upsides of an economics PhD. You could also probably do similar work in political science.
>> Our impression is that although many of these topics have received attention from historians (examples: 1, 2, 3, 4, 5), some are comparatively neglected within the subject, especially from a more quantitative or impact-focused perspective.
I’d also note that I don’t think some of these cites are history—2 and 4 were written by anthropologists. (I think in the former case he’s sometimes classified as a biologist, psychologist, or economist too.)
I really do hope we have EAs studying history and fully support it, and I just wanted to give some closely related options!
Great post, and thanks for writing it. One note: if polarization is defined as “more extreme views on each issue” (e.g. more people wanting extremely high or extremely low taxes), then it does not seem to be happening according to some research. The sort of polarization happening in the U.S. is more characterized as ideological sorting. That is, views on any particular issue (abortion, affirmative action, gun control) don’t have more mass on the extremes than before, but the views in each political party are less mixed.
This is nonetheless important, and I don’t think it radically changes much of what you said. Affect toward the opposite party is still much more negative than before. But it might suggest we should be more concerned about the conflict between the parties itself (e.g. abusing constitutional norms, cancellation) and less concerned about their policies per se.
Great post, and I’m excited to see RP work on this. I have great confidence in your carefulness about this.
A concern I have with pretty much every approach to weighting welfare across species is that the correct weights may depend on the type of experience. For example, I could imagine the intensity of physical pain being very similar across species but the severity of depression from not being able to move varying greatly.
Is there a way to allow for this within the approach you lay out here?
I found this informative:
>> Are you more funding- or talent-constrained?
>> Oscar: There are lots of researchers out there who would work on this if we offered them funding to do so.
>> Michelle: Wild Animal Initiative is primarily funding-constrained. Hiring can also be challenging, but not as much.
>> Peter: Funding-constrained. We have had to turn away talented people we didn’t have the funds to hire.
Given that most of the messaging in the EA community for a couple years has been that human capital constraints are greater than funding constraints, I was surprised to see this. I know there have been objections that this messaging is focused on longtermist and movement-building work and less representative of farmed animal advocacy, for example, but this is an update for me.
I had not read through the CEA mistakes page before (linked in your post), and I am very impressed with it. I wanted to note that I’m pleased and kind of touched that the page lists neglect of animal advocacy in the 2015 and 2016 EAGs. I was one of the advocates who was unhappy, and I was not sure whether there was recognition of this, so it was really meaningful to see CEA admit this and detail steps that are taken.
Very interesting! I wanted to note that this further supports Will’s comment on his recent post that understanding prior-setting better could be very high-impact.
Yeah, I agree the facile use of “white supremacy” here is bad, and I do want to keep ad hominems out of EA discourse. Thanks for explaining this.
I guess I still think it makes important enough arguments that I’d like to see engagement, though I agree it would be better said in a more cautious and less accusatory way.
I think the concerns about utopianism are well-placed and merit more discussion in effective altruism. I’m sad to see the post getting downvoted.
Not posting this because I agree with it but rather because I think it’s one of the more influential econ papers actually dealing with the reality of addiction: Bernheim and Rangel (2004). On their model, those suffering from addiction have no control and are poorer (even than people of the same ex ante income), and for those not suffering from addiction, it’s not obvious why they’re irrational.
I think the conclusion is almost certainly wrong, but why it’s wrong is a bit subtle and hard to pin down, so I thought it might be helpful to be aware of going into this. It’s published in the AER, so it’s sort of an influential enhancement of Larks’s comment.
(Also full disclosure that Bernheim is my advisor. That mostly just makes me more perplexed by this paper.)
A nice, similar writeup along these lines is the book Portfolios of the Poor. Check it out if you want to go a bit more in-depth specifically on finances and how they affect daily life.
I obviously am a fan of this post! A few thoughts.
-
I don’t think “sin taxes” is the best phrase here. Sin taxes usually refer to internalities, like cigarettes, whereas this is an externality, more like a climate tax.
-
I like the soft institutions like research commissions and cabinet members but suspect the harder institutions like a veto or additional legislator or even a court will get captured and perverted. Almost all of these institutions rely on norms to actually care about future generations, and norms collapse every so often when there’s a reason to subvert them. Maybe this is just me looking at the current political moment, but since we are talking about long time horizons, moments like this will recur, and I think it takes longer to salvage norms than it does to erode them. For example, claims I could see being made to justify any particular political agenda:
“We need to preserve our religious values for the sake of future generations”
“We need to do [insert radical policy] to address the present crisis so that our civilization survives for future generations”
“We must completely halt resource usage to preserve the earth for future generations”
“We must maximize resource usage so that we grow as much as possible for future generations”
Etc.
For things like term lengths there’s a real literature on things like that in political economy that could help get a pretty good sense of expected impact.
-
Thanks for writing this! I was curious whether you had research or particular observations that led you to the above approach. Last year I researched evidence-based policy somewhat, and I came away thinking that the focus on crafting research around what decision-makers want is generally overrated. That may not always be the case, granted, and when research is already aimed at a specific decision-maker, it’s worth doing it right. But I would highlight that I think a lot of research, especially foundational research, has its impact in a more indirect way.
Sorry—you’re right that this doesn’t work. To clarify, I was thinking that the method of picking the color should be fixed ex-ante (e.g. “I pick red as the color with 50% probability”), but that doesn’t do the trick because you need to pool the colors for ambiguity to arise.
The issue is that the problem the paper identifies does not come up in your example. If I’m offered the two bets simultaneously, then an ambiguity-averse decision-maker, like an EU decision-maker, will take both bets. If I’m offered the bets sequentially, without knowing I’ll be offered both when I’m offered the first one, then neither an ambiguity-averse nor a risk-averse EU decision-maker will take them. The reason is that the first one offers the EU decision-maker a 50% chance of winning, so given risk aversion its value is less than 50% of $1. So your example doesn’t distinguish a risk-averse EU decision-maker from an ambiguity-averse one.
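To see the risk-aversion point numerically, here is a minimal sketch with an assumed concave utility function (u(x) = √x; the specific function is my choice for illustration, not from the paper):

```python
import math

# Assumed concave utility: u(x) = sqrt(x). Any strictly concave u gives the
# same qualitative result (risk aversion).
def u(x):
    return math.sqrt(x)

def inverse_u(y):
    return y ** 2

# A bet paying $1 with probability 0.5 (and $0 otherwise).
eu = 0.5 * u(1.0) + 0.5 * u(0.0)     # expected utility = 0.5
certainty_equivalent = inverse_u(eu)  # = 0.25

# The risk-averse EU decision-maker values the bet at $0.25, below 50% of $1,
# so declining it doesn't by itself reveal ambiguity aversion.
print(certainty_equivalent)
```

This is why turning down the first bet is consistent with plain risk aversion, and distinguishing the two attitudes requires the more elaborate constructions in the paper.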
So I think unfortunately we need to go with the more complicated examples in the paper. They are obviously very theoretical. I think it could be a valuable project for someone to translate these into more practical settings to show how these problems can come up in a real-world sense.