I research a wide variety of issues relevant to global health and development. I’m always happy to chat—if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Karthik Tadepalli
I donate 5% of my income and I’m gradually escalating (only two years out of college). I don’t plan to pledge because I don’t see the need for a commitment device. I don’t like the commitment framing—I think it takes a cudgel to an altruistic motivation that to me feels natural right now. Put differently:
I ran a different version of the [drowning child] thought experiment without language around obligation: “Imagine that – somehow – the universe has deemed me unalterably Good regardless of whether I help. Do I still want to rescue the child?”
...
After rewording Singer’s thought experiment, it dawned on me that I’d been using the frame of an “obligation” as a psychological whip to get myself to do what I already wanted to do. Weird.
When I went looking for the force or deity outside of myself that held the whip, there was nothing there. I was the one holding the whip. This was a revelation. The thing underlying my moral “obligation” came from me, my own mind. This underlying thing was actually a type of desire. It turned out that I wanted to help suffering people. I wanted to be in service of a beautiful world. Hm.
Suddenly the words of my moral vocabulary took on new meanings. Was I obligated to save the drowning child? Did I have a responsibility or moral duty? Was it something that one should or ought do? This language seemed misleading. These words seemed to replace my natural impulse to help with an artificial demand imposed by…nothing and no one.
The natural concern is that I’m naively assuming that my future self will share these values. That is correct.
If you are truly risk neutral, ruin games are good. The long-term outcome is not worse in expectation, because the 99% of outcomes in which the world is destroyed are outweighed by how much better off the remaining 1% of outcomes are. If you believe in risk neutrality as a normative stance, then you should be okay with that.
Put another way: if someone offers you a bet with a 99% chance of multiplying your money 1000x and a 1% chance of losing it all, you might want to take it once or twice. You don’t have to choose between “never take it” and “take it forever”. But if you find the idea of sequence dependence desirable in this situation, then you shouldn’t be risk neutral.
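To make that concrete, here is a minimal simulation sketch of what happens if you do take it forever. The 99%/1000x odds are the ones from the example above; the number of rounds and Monte Carlo trials are arbitrary illustrative choices.

```python
import math
import random

# Minimal sketch: repeatedly take a bet that multiplies wealth by 1000 with
# probability 0.99 and wipes it out with probability 0.01. ROUNDS and TRIALS
# are arbitrary illustrative choices.
P_WIN, MULTIPLIER, ROUNDS, TRIALS = 0.99, 1000, 500, 10_000

# Risk-neutral bookkeeping: expected wealth after t rounds is (0.99 * 1000)^t.
log10_expected = ROUNDS * math.log10(P_WIN * MULTIPLIER)

# But the probability of never hitting the 1% ruin branch is 0.99^t.
survival_prob = P_WIN ** ROUNDS

# Monte Carlo check of how often a path ends in ruin.
ruined = sum(any(random.random() > P_WIN for _ in range(ROUNDS)) for _ in range(TRIALS))

print(f"expected wealth after {ROUNDS} rounds: ~10^{log10_expected:.0f} x starting wealth")
print(f"probability of never being ruined: {survival_prob:.4f}")
print(f"simulated ruin frequency: {ruined / TRIALS:.4f}")
```

The expected value grows astronomically even though almost every path ends in ruin, which is exactly the property a committed risk-neutral agent has to endorse.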
Deciding to apply risk aversion in some cases and risk neutrality in others is not special to ergodicity either. If you have a risk averse utility function, the curvature increases with the level of value at stake. I claim that for “small” numbers of lives at stake, my utility function is only slightly curved, so it is approximately linear and risk neutrality describes my optimal choice well. However, for “large” numbers, the curvature dominates and risk neutrality fails.
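One rough way to see that small-versus-large distinction, assuming log utility over total wealth purely for concreteness (nothing here is from the original comment): for gambles that are small relative to what you already have, the risk-averse valuation is almost exactly the risk-neutral one, but for large gambles the two diverge sharply.

```python
import math

def certainty_equivalent(wealth: float, stake: float, p_win: float = 0.5) -> float:
    """Wealth-equivalent value of a fair 50/50 gamble of +/- stake,
    for an agent with log utility over total wealth."""
    eu = p_win * math.log(wealth + stake) + (1 - p_win) * math.log(wealth - stake)
    return math.exp(eu) - wealth

WEALTH = 1_000.0
for stake in (1.0, 10.0, 500.0, 990.0):
    ce = certainty_equivalent(WEALTH, stake)
    # A risk-neutral agent values a fair gamble at exactly 0 at every stake.
    print(f"stake {stake:>5.0f}: risk-neutral value 0.00, log-utility value {ce:+.2f}")
```

For a stake of 1 the gap is a fraction of a cent; for a stake of 990 the log-utility agent would pay almost 860 to avoid the gamble.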
Like most defenses of ergodicity economics that I have seen, this is just an argument against risk neutral utility.
Edit: I never defined risk neutrality. Expected utility theory says that people maximize the expectation of their utility function, $E[u(x)]$. Risk neutrality means that $u$ is linear, so that maximizing expected utility is the exact same thing as maximizing the expected value of the outcome $x$. That is, $E[u(x)] = u(E[x])$. However, this is not true in general. If $u$ is concave, meaning that it satisfies diminishing marginal utility, then $E[u(x)] < u(E[x])$ - in other words, the expected utility of a bet is less than the utility of its expected value as a sure thing. This is known as risk aversion.
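A quick numerical check of the concave case (the square-root function and the 50/50 bet are arbitrary illustrative choices, not anything from the thread):

```python
import math

# Jensen's inequality check for a concave utility: a 50/50 bet over 0 or 100.
# Square root is just an arbitrary example of diminishing marginal utility.
outcomes, probs = [0.0, 100.0], [0.5, 0.5]
u = math.sqrt

expected_value = sum(p * x for p, x in zip(probs, outcomes))       # E[x] = 50
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))  # E[u(x)] = 5
utility_of_expectation = u(expected_value)                         # u(E[x]) ~= 7.07

# 5.0 < 7.07: the bet is worth less than its expected value as a sure thing.
print(expected_utility, utility_of_expectation)
```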
Consider for instance whether you are willing to take the following gamble: you’re offered to press a button with a 51% chance of doubling the world’s happiness but a 49% chance of ending it. This problem, also known as Thomas Hurka’s St Petersburg Paradox, highlights the following dilemma: Maximizing expected utility suggests you should press it, as it promises a net positive outcome.
No. A risk neutral agent would press the button because they are maximizing expected happiness. A risk averse agent will get less utility from the doubled happiness than the utility they would lose from losing all of the existing happiness. For example, if your utility function is concave, say $u(x) = \sqrt{x}$, and current happiness is 1, then the expected utility of this bet is $0.51\sqrt{2} + 0.49\sqrt{0} \approx 0.72$. Whereas the current utility is $\sqrt{1} = 1$, which is superior to the bet.
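The same comparison written out as a sketch, again using square-root utility and happiness normalized to 1 purely for illustration:

```python
import math

# The button bet: 51% chance of doubling total happiness, 49% chance of zero.
P_DOUBLE, P_END = 0.51, 0.49
current = 1.0            # current happiness, normalized to 1 as above
u = math.sqrt            # the illustrative concave utility from the example

# Risk-neutral agent: compare expected happiness to current happiness.
expected_happiness = P_DOUBLE * (2 * current) + P_END * 0.0    # 1.02 > 1, so press

# Risk-averse agent: compare expected utility to the utility of the status quo.
expected_utility = P_DOUBLE * u(2 * current) + P_END * u(0.0)  # ~0.72
status_quo = u(current)                                        # 1.0, so don't press

print(expected_happiness, expected_utility, status_quo)
```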
When you are risk averse, your current level of happiness/income determines whether a bet is optimal. This is a simple and natural way to incorporate the sequence dependence that you emphasize. After winning a few bets, your income/happiness has grown so much that the marginal value of further gains is much lower than the value of what you already have, so risking it is not worthwhile. Expected utility theory is totally compatible with this; no ergodicity economics needed to resolve this puzzle.
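A minimal sketch of that sequence dependence, assuming (purely for illustration) the utility function u(w) = log(1 + w) over total happiness: the very same double-or-nothing bet is worth taking at low levels and stops being worth taking after a few wins have compounded.

```python
import math

def accepts_bet(wealth: float, p_win: float = 0.51) -> bool:
    """Does an expected-utility maximizer with u(w) = log(1 + w) accept a bet
    that doubles wealth with probability p_win and wipes it out otherwise?"""
    u = lambda w: math.log(1 + w)
    return p_win * u(2 * wealth) + (1 - p_win) * u(0.0) > u(wealth)

# Start at a low level and keep winning: the same bet stops being worth taking
# once the accumulated level is high enough.
level = 0.01
while accepts_bet(level):
    print(f"at level {level:.2f}: take the bet (and suppose it is won)")
    level *= 2
print(f"at level {level:.2f}: decline the bet")
```

The log(1 + w) form is doing the work here: its relative risk aversion rises with the level, so the same proportional bet gets less attractive as the level grows; with other concave functions the answer could be level-independent.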
Now, risk aversion is unappealing to some utilitarians because it implies that there is diminishing value to saving lives, which is its own bullet to bite. But any framework that takes the current state of the world into account when deciding whether a bet is worthwhile has to bite that bullet, so it’s not like ergodicity economics is an improvement in that regard.
Good post, more detailed thoughts later, but one nitpick:
As far as I can tell, the deworming project being “one of the most successful RCTs to date” is just wrong. There is widespread disagreement about what we can conclude about deworming from the available evidence, with many respected academics saying that deworming has no effect on education at all. Many RCTs show a much larger effect.
I don’t think “most successful RCT” is supposed to mean “most effective intervention” but rather “most influential RCT”; deworming has been picked up by a bunch of NGOs and governments after Miguel and Kremer, plausibly because of that study.
(Conflict note: I know and like Ted Miguel)
I upvoted because I liked the story, but this feels like a pretty glaring strawman of “mathematical solutions to multifaceted human problems”. I can’t imagine any reasonable solution/intervention to which this critique would apply.
Came here to comment this. It’s the kind of paradigmatic criticism that Scott Alexander talks about, which everyone can nod and agree with when it’s an abstraction.
I love this, thank you for pushing the frontiers of doing good!
Labor markets in LMICs (what we know about growth, part 3)
There were a bunch, most prominently IRRI in the Philippines—Table 1 in this paper lists all of them.
Interesting, then I figure it probably substituted for meat consumption at restaurants rather than meat consumption at home. Regardless, I think it’s mostly valid to use an increase in plant-based consumption as a proxy for a reduction in meat consumption, since total food consumption is relatively stable.
Where are you getting that it didn’t decrease meat sales? I see nothing in the article pointing to that, and they also point out that aggregate meat sales have been falling.
I would be extremely skeptical that vegan consumption could go up a lot without meat consumption going down, since that would imply people are just consuming a lot more food in aggregate compared to previous years, which seems unlikely.
I don’t have any disagreement with getting people information early, I just think characterizing the current system as one where only the criticizee benefits is wrong.
Yes, Ramsey discounting focuses on higher incomes of people in the future, which is the part I focused on. I probably shouldn’t have said “main”, but I meant that uncertainty over the future seems like the first-order concern to me (and Ramsey ignores it).
Habryka’s comment:
applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermine the results of any analysis about the long-run future).
seems to be arguing for a zero discount rate.
Good point that growth-adjusted discounting doesn’t apply here, my main claim was incorrect.
If you think that the risk of extinction in any year is a constant $r$, then the risk of extinction by year $t$ is $1 - (1-r)^t$, so that makes $r$ the only principled discount rate. If you think the risk of extinction is time-varying, then you should do something else. I imagine that a hyperbolic discount rate or something else would be fine, but I don’t think it would change the results very much (you would just have another small number as the break-even discount rate).
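A tiny sketch of why a constant extinction risk behaves like a constant discount rate (the 0.2% annual risk and the constant per-year utility of 1 are placeholder numbers, not figures from the thread):

```python
# Constant annual extinction risk vs. a constant discount rate.
ANNUAL_RISK = 0.002
HORIZON = 10_000  # years; long enough to approximate the infinite sum

# Survival-weighted utility: each year counts only if the world is still here.
survival_weighted = sum((1 - ANNUAL_RISK) ** t for t in range(HORIZON))

# Exponential discounting at a rate equal to the annual risk.
discounted = sum(1 / (1 + ANNUAL_RISK) ** t for t in range(HORIZON))

# Both are roughly 1 / ANNUAL_RISK = 500 "expected years" of utility.
print(survival_weighted, discounted, 1 / ANNUAL_RISK)
```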
Matthew is right that uncertainty over the future is the main justification for discount rates, but another principled reason to discount the future is that future humans will be significantly richer and better off than we are, so if marginal utility is diminishing, then resources are better allocated to us than to them. This classically gives you a discount rate of $\rho = \delta + \eta g$, where $\rho$ is the applied discount rate, $\delta$ is a rate of pure time preference that you argue should be zero, $g$ is the growth rate of income, and $\eta$ determines how steeply marginal utility declines with income. So even if you have no ethical discount rate ($\delta = 0$), you would still end up with $\rho = \eta g > 0$. Most discount rates are loaded on the growth adjustment ($\eta g$) and not the ethical discount rate ($\delta$), so I don’t think longtermism really bites against having a discount rate. [EDIT: this is wrong, see Jack’s comment]
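For concreteness, a back-of-the-envelope version of that formula; the growth and elasticity values below are common illustrative numbers, not anything from the thread:

```python
# Ramsey rule: rho = delta + eta * g. The eta and g values below are
# illustrative, not figures from the original comment.
def ramsey_rate(delta: float, eta: float, g: float) -> float:
    """Consumption discount rate: pure time preference plus growth adjustment."""
    return delta + eta * g

print(ramsey_rate(delta=0.0, eta=1.0, g=0.02))   # 0.02: 2% even with zero ethical discounting
print(ramsey_rate(delta=0.0, eta=1.5, g=0.02))   # 0.03
print(ramsey_rate(delta=0.01, eta=1.0, g=0.02))  # 0.03 with 1% pure time preference
```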
Also, am I missing something, or would a zero discount rate make this analysis impossible? The future utility with and without science is “infinite” (the sum of utilities diverges unless you have a discount rate), so how can you work without a discount rate?
You might want to add what the subject of Moretti (2021) is, and what the result is, just so people know if they’re interested in learning more.
But the plans I’ve seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
I see a pretty important benefit to the critic, because you’re ensuring that there isn’t some obvious response to your criticisms that you are missing.
I once posted something that revised/criticized an Open Philanthropy model, without running it by anyone there, and it turned out that my conclusions were shifted dramatically by a coding error that was detected immediately in the comments.
That’s a particularly dramatic example that I don’t expect to generalize, but often if a criticism goes “X organization does something bad” the natural question is, why do they do that? Is there a reason that’s obvious in hindsight that they’ve thought about a lot, but I haven’t? Maybe there isn’t, but I would want to run a criticism by them just to see if that’s the case.
I don’t think people are obligated to build in the feedback they get extensively if they don’t think it’s valid/their point still stands.
The only thing I would add to Joseph’s response is that EAG Bay Area last year had a ton of undergrads.
My point is that AI could plausibly have rules for interacting with other “persons”, and those rules could look much like ours, but that we will not be “persons” under their code. Consider how “do not murder” has never applied to animals.
If AIs treat us like we treat animals then the fact that they have “values” will not be very helpful to us.
I think a neutral world is much better than extinction, and most dystopias are also preferable to human extinction. The latter is debatable but the former seems clear? What do you imagine by a neutral world?