I think the main arguments against suicide are that it causes your loved ones a lot of harm, and (for some people) there is a lot of uncertainty in the future. Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really the remaining years of your life) is net-negative, rather than commit suicide you should increase variance because you can only stand to benefit.
The idea that “the future might not be good” comes up on the forum every so often, but this doesn’t really harm the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don’t fall under the control of a stable totalitarian state)
- Since the effort bars are ginormous and we’re pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today
Yeah, it’s difficult to intuit, but I think that’s pretty clearly because we’re bad at imagining the aggregate harm of billions (or trillions) of mosquito bites. One way to reason around this is to think:
- I would rather get punched once in the arm than once in the ribs, but I would rather get punched once in the ribs than 10x in the arm
- I’m fine with disaggregating, and saying that I would prefer a world where 1 person gets punched in the gut to a world where 10 people get punched in the arm
- I’m also fine with multiplying those numbers by 10 and saying that I would prefer 10 people PiG to 100 people PiA
- It’s harder to intuit this for really really big numbers, but I am happy to attribute that to a failure of my imagination, rather than some bizarre effect where total utilitarianism only holds for small populations
- I’m also fine intensifying the first harm by a little bit so long as the populations are offset (e.g. I would prefer 1 person punched in the face to 1000 people punched in the arm)
- Again, it’s hard to continue to intuit this for really extreme harms and really large populations, but I am more willing to attribute that to cognitive failures and biases than to a bizarre ethical rule
Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.
> For instance, many people wouldn’t want to enter solipsistic experience machines (whether they’re built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.
I just don’t trust this intuition very much. I think there is a lot of anxiety around experience machines due to:
- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what’s real
And to be clear, I share the intuition that experience machines seem bad, and yet I’m often totally content to play video games all day long because it doesn’t violate those two conditions.
So what I’m roughly arguing is: We have some good reasons to be wary of experience machines, but I don’t think that intuition does much to support the belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
people alive today have negative terminal value
This seems entirely plausible to me. A couple jokes which may help generate an intuition here (1, 2)
You could argue that suicide rates would be much higher if this were true, but there are lots of reasons people might not commit suicide despite experiencing net-negative utility over the course of their lives.
At the very least, this doesn’t feel as obviously objectionable to me as the other proposed solutions to the “mere addition paradox”.
The Repugnant Conclusion Isn’t
The problem (of worrying that you’re being silly and getting mugged) doesn’t arise when probabilities are merely tiny; it arises when probabilities are tiny and you’re highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that “spending the next year of my life on AI Safety research” will prevent x-risk.
In the former cases, we have base rates and many trials. In the latter case, I’m just doing a very rough Fermi estimate. Say I have 5 parameters, each with an order of magnitude of uncertainty, which when multiplied out is just really horrendous.
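To make “really horrendous” concrete, here’s a toy Monte Carlo sketch (my own illustration, not from the original comment): model each of the 5 parameters as lognormal with a 90% interval spanning one order of magnitude, and look at how wide the 90% interval of their product becomes.

```python
import math
import random

# Each parameter is lognormal with a 5th-95th percentile span of 10x.
Z90 = 1.645                       # z-score for a 90% interval
sigma = math.log(10) / (2 * Z90)  # per-parameter lognormal sigma

random.seed(0)

def fermi_sample(n_params: int = 5) -> float:
    """One draw of the product of n_params independent uncertain factors."""
    return math.exp(sum(random.gauss(0.0, sigma) for _ in range(n_params)))

draws = sorted(fermi_sample() for _ in range(100_000))
p5, p95 = draws[5_000], draws[95_000]
print(f"90% interval of the product spans a factor of ~{p95 / p5:.0f}")
# Analytically: sigma_product = sigma * sqrt(5), so the product's 90%
# interval spans 10**sqrt(5) ≈ 170x, not 10x. Uncertainty compounds.
```

So five “merely” order-of-magnitude uncertainties multiply into a roughly 170x spread in the final estimate, which is why base rates and many trials are so much more comfortable to reason from.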
Anyway, I mostly agree with what you’re saying, but it’s possible that you’re somewhat misunderstanding where the anxieties you’re responding to are coming from.
Thanks, this is interesting. I wrote a bit about my own experiences here:
Under mainstream conceptions of physics (as I loosely understand them), the number of possible lives in the future is unfathomably large, but not actually infinite.
Longtermism does mess with intuitions, but it’s also not basing its legitimacy on a case from intuition. In some ways, it’s the exact opposite: it seems absurd to think that every single life we see today could be nearly insignificant when compared to the vast future, and yet this is what one line of reasoning tells us.
I originally wrote this post for my personal blog and was asked to cross-post here. I stand by the ideas, but I apologize that the tone is a bit out of step with how I would normally write for this forum.
Punching Utilitarians in the Face
I read the title and thought this was a really silly approach, but after reading through the list I am fairly surprised how sold I am on the concept. So thanks for putting this together!
Minor nit: One concern I still have is over drilling facts into my head which won’t be true in the future. For example, instead of:
> The average meat consumption per capita in China has grown 15-fold since 1961
I would prefer:
> Average meat consumption per capita in China grew 15x in the 60 years after 1961
This is great, thanks Michael. I wasn’t aware of the recent 2022 paper arguing against the Stevenson/Wolfers result. A couple questions:
In this talk (starting around 6:30), Peter Favaloro from Open Phil talks about how they use a utility function that grows logarithmically with income, and how this is informed by Stevenson and Wolfers (2008). If the scaling were substantially less favorable (even in poor countries), that would have some fairly serious implications for their cost-effectiveness analysis. Is this something you’ve talked to them about?
Second, just curious how the Progress Studies folk responded when you gave this talk at the Austin workshop.
For some classes of meta-ethical dilemmas, Moral Uncertainty recommends using variance voting, which requires you to know the mean and variance of each theory under consideration.
How is this applied in practice? Say I give 95% weight to Total Utilitarianism and 5% weight to Average Utilitarianism, and I’m evaluating an intervention that’s valued differently by each theory. Do I literally attempt to calculate values for variance? Or am I just reasoning abstractly about possible values?
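For what it’s worth, here’s a toy sketch (with entirely made-up numbers and hypothetical interventions A/B/C) of what literally calculating the variance normalization might look like: rescale each theory’s valuations to unit standard deviation so neither theory dominates just because it uses a bigger numerical scale, then take the credence-weighted sum.

```python
import statistics

# Credences over the two theories from the question.
credences = {"total": 0.95, "average": 0.05}

# Hypothetical values each theory assigns to three candidate interventions.
# Note total utilitarianism uses a much larger raw scale than average.
values = {
    "total":   {"A": 1000.0, "B": 10.0, "C": 0.0},
    "average": {"A": -2.0,   "B": 5.0,  "C": 0.0},
}

def normalized(theory_values: dict) -> dict:
    """Rescale one theory's values to mean 0 and standard deviation 1."""
    vs = list(theory_values.values())
    mu, sd = statistics.mean(vs), statistics.pstdev(vs)
    return {opt: (v - mu) / sd for opt, v in theory_values.items()}

scores = {opt: 0.0 for opt in values["total"]}
for theory, credence in credences.items():
    for opt, v in normalized(values[theory]).items():
        scores[opt] += credence * v

best = max(scores, key=scores.get)
print(scores, "->", best)
```

Whether you fill in the raw values by explicit calculation or just by abstract reasoning about plausible magnitudes seems like the real practical question; the normalization step itself is mechanical once you have them.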
If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!
I will push back a bit on this as well. I think it’s very healthy for the community to be skeptical of Open Philanthropy’s reasoning ability, and to be vigilant about trying to point out errors.
On the other hand, I don’t think it’s great if we have a dynamic where the community is skeptical of Open Philanthropy’s intentions. Basically, there’s a big difference between “OP made a mistake because they over/underrated X” and “OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants.”
In general, WSJ reporting on SF crime has been quite bad. In another article they write:
> Much of this lawlessness can be linked to Proposition 47, a California ballot initiative passed in 2014, under which theft of less than $950 in goods is treated as a nonviolent misdemeanor and rarely prosecuted.
Which is just not true at all. Every state has some threshold, and California’s is actually on the “tough on crime” side of the spectrum.
Shellenberger himself is an interesting guy, though not necessarily in a good way.
> Conversely, if sentences are reduced more than in the margin, common sense suggests that crime will increase, as observed in, for instance, San Francisco.
A bit of a nit since this is in your appendix, but there are serious issues with this reasoning and the linked evidence. Basically, this requires the claims that:
1. San Francisco reduced sentences
2. There was subsequently more crime
1. Shellenberger at the WSJ writes:
> the charging rate for theft by Mr. Boudin’s office declined from 62% in 2019 to 46% in 2021; for petty theft it fell from 58% to 35%.
He doesn’t provide a citation, but I’m fairly confident he’s pulling these numbers from this SF Chronicle writeup, which is actually citing a change from 2018-2019 to 2020-2021. So right off the bat Shellenberger is fudging the data.
Second, the aggregated data is misleading because there were specific pandemic effects in 2020 unrelated to Boudin’s policies. If you look at the DA office’s disaggregated data, there is a drop in filing rate in 2020, but it picks up dramatically in 2021. In fact, the 2021 rate is higher than the 2019 rate both for crime overall and for the larceny/theft category. So not only is Shellenberger’s claim misleading, it’s entirely incorrect.
You can be skeptical of the DA office’s data, but note that this is the same source used by the SF Chronicle, and thus by Shellenberger as well.
2. Despite popular anecdotes, there’s really no evidence that crime was actually up in San Francisco, or that it occurred as a result of Boudin’s policies.
- Actual reported shoplifting was down from 2019 to 2020
- Reported shoplifting in adjacent counties was down less than in California as a whole, indicating a lack of “substitution effects” where criminals go where sentences are lighter
- The store closures cited by Shellenberger can’t be pinned on increased crime under Boudin because:
A) Walgreens had already announced a plan to close 200 stores back in 2019
B) Of the 8 stores that closed in 2019 and 2020, at least half closed in 2019, making the 2020 closures unexceptional
C) The 2021 store closure rate for Walgreens is actually much lower than comparable benchmarks, like the closures of sister company Duane Reade in NYC over the same year, or the dramatic drop in Walgreens stock price. It is also not much higher than the historical average of 3.7 store closures per year in SF.
I have a much more extensive writeup on all of this here:
Finally, the problem with the “common sense” reasoning is that it goes both ways. Yes, it seems reasonable to think that less punishment would result in more crime, but we can similarly intuit that spending time in prison and losing access to legal opportunities would result in more crime. Or that having your household’s primary provider incarcerated would lead to more crime. Etc etc. Yes, we are lacking in high quality evidence, but that doesn’t mean we can just pick which priors to put faith in.
Does EA Forum have a policy on sharing links to your own paywalled writing? E.g. I’ve shared link posts to my blog, and others have shared link posts to their substacks, but I haven’t seen anyone share a link post to their own paid substack before.