# Timeline Utilitarianism

### Total vs average utilitarianism

Say you have one utility (think of utility as happiness points) and you have to choose between either creating another person that has two utility, or increasing your own utility to two.

A total utilitarian would choose the first option: it generates the highest amount of total utility in the next tick (the next moment in time). An average utilitarian would choose the second option, since it generates the highest amount of average utility in the next tick.
These two theories tell us how to aggregate utility at a single moment in time. But how should we aggregate utility over a period of time?

### Introducing timeline utilitarianism

Say you are about to die and you have two utility. You have to choose between either just dying, or dying while creating another person that has one utility.

Let’s look at how a timeline can aggregate the total amount of utility.
Timeline A has two ticks. You could aggregate the total amount of utility of this timeline by simply adding the two ticks together (2+1). Let’s call this method of aggregating “total timeline utilitarianism”. Here we can see that timeline A would be a better choice than timeline B, since timeline B only has one tick and therefore only two utility.

You could also aggregate the utility of timeline A by taking the average of the two ticks ((2+1)÷2). Let’s call this method of aggregating “average timeline utilitarianism”. Here we can see that timeline B would be a better choice than timeline A, since timeline B only has one tick and therefore an average of two utility, versus 1.5 for timeline A.
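Both aggregation rules can be sketched in a few lines of Python. The tick values come from the example above (each tick here happens to contain a single person, so a tick’s utility is just that person’s utility); the function names are mine, not established terminology.

```python
# Timeline A: you die with 2 utility, then the person you created lives with 1.
# Timeline B: you die with 2 utility and no one else ever exists.
timeline_a = [2, 1]  # utility at each tick
timeline_b = [2]

def total_timeline(ticks):
    """Total timeline utilitarianism: sum utility over all ticks."""
    return sum(ticks)

def average_timeline(ticks):
    """Average timeline utilitarianism: mean utility per tick."""
    return sum(ticks) / len(ticks)

print(total_timeline(timeline_a), total_timeline(timeline_b))      # 3 2
print(average_timeline(timeline_a), average_timeline(timeline_b))  # 1.5 2.0
```

As in the text, the total rule favors timeline A (3 > 2) while the average rule favors timeline B (2.0 > 1.5).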

### Combining moment and timeline utilitarianism

So we have “moment utilitarianism” to look at moments in time and “timeline utilitarianism” to look at the entire timeline. What happens if we combine them? Let’s introduce some terms. In “total average utilitarianism” the “total” refers to how we should aggregate the entire timeline. The “average” refers to how we should aggregate the individual moments. I will always mention the timeline aggregation first and the moment aggregation second. There are four different combinations that all make different claims about how we should act.

If we want to maximize total total utility we should choose timeline A. If we want to maximize average total utility we should choose timeline B. If we want to maximize total average utility we should choose timeline C. If we want to maximize average average utility we should choose timeline D.
Usually when people talk about different types of utilitarianism they automatically presuppose “total timeline utilitarianism”. In fact, the current debate between total and average utilitarianism is actually a debate between “total total utilitarianism” and “total average utilitarianism”. I hope this post has pointed out that this assumption isn’t the only option.
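The four combinations can be written as nested aggregations: first aggregate each moment across the people alive at that tick, then aggregate the resulting per-tick values across the timeline. Here is a minimal Python sketch; the example timeline is made up for illustration, since the post’s diagrams for timelines A–D aren’t reproduced here.

```python
# A timeline is a list of ticks; each tick lists every person's utility.
# This example timeline is hypothetical, not one from the post's images.
example_timeline = [[2, 2], [1]]  # tick 1: two people; tick 2: one person

def mean(xs):
    return sum(xs) / len(xs)

def aggregate(timeline, over_timeline, over_moments):
    """Apply a moment aggregation inside each tick, then a timeline
    aggregation across ticks."""
    return over_timeline([over_moments(tick) for tick in timeline])

print(aggregate(example_timeline, sum, sum))    # total total:     (2+2) + 1       = 5
print(aggregate(example_timeline, mean, sum))   # average total:   ((2+2) + 1) / 2 = 2.5
print(aggregate(example_timeline, sum, mean))   # total average:   2 + 1           = 3.0
print(aggregate(example_timeline, mean, mean))  # average average: (2 + 1) / 2     = 1.5
```

Note that the four rules already disagree about this one small timeline, which is the point of distinguishing them.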

### Wrapping up

In reality we have many more options to choose from, and we will have to do complicated probability calculations under uncertainty instead of following a simple decision tree. Some might argue that non-existence should count as zero utility. Some might argue for more exotic forms of utilitarianism, like median or mode utilitarianism (I hope you don’t spend too much time fretting over which of these options is the “correct” form of utilitarianism, and adopt something like meta-preference utilitarianism instead). This is just a simplified model to introduce the concept of timeline utilitarianism. In future posts I will expand on this concept and explore how it interacts with things like hingeyness and choice under uncertainty.

• Interesting idea!

In light of the relativity of simultaneity (whether A happens before or after B can depend on your reference frame), you might have to just choose a reference frame or somehow aggregate over multiple reference frames (and there may be no principled way to do so). If you just choose your own reference frame, your theory becomes agent-relative, and may lead to disagreement about what’s right between people who share the same moral and empirical beliefs but choose different reference frames.

Maybe the point was mostly illustrative, but I’d lean against using any kind of average (including mean, median, etc.) without special care for negative cases. If the average is negative, you can improve it by adding negative lives, as long as they’re better than the average.

• Personally, I’ve always understood total utilitarianism to already be across both time and space, as it is often contrasted not just with average utilitarianism but with person-affecting/prior-existence views.

• Yes, (total) total utilitarianism is both across time and space, but you can aggregate across time and space in many different ways. E.g. median total utilitarianism is also both across time and space, but it aggregates very differently.

• Right, I guess what I mean is that in an EA context, I’ve historically understood total utilitarianism to be total (an integral) across both time and space, rather than total in one dimension but not the other.

• I think so too, because you can’t really talk about ethics without a timeframe. I wasn’t trying to argue that people don’t use timeframes, but rather that people automatically use total timeline utilitarianism without realizing that other options are even possible. This was what I was trying to get at by saying:

> Usually when people talk about different types of utilitarianism they automatically presuppose “total timeline utilitarianism”. In fact, the current debate between total and average utilitarianism is actually a debate between “total total utilitarianism” and “total average utilitarianism”.

• Got it, I must have just misread your post then! :) Thanks for your patience in the clarification!

• No problem! As a non-native English speaker, this was an extremely difficult post to write, which is why I leaned so heavily on images. If you (or anyone) have any suggestions for how I could reword this post to make it clearer, please let me know.

EDIT: I’ve changed the term “standard utilitarianism” to “moment utilitarianism”; I hope this clears up some of the confusion.

• The question of how to aggregate over time may even have important consequences for population-ethics paradoxes. You might be interested in reading Vanessa Kosoy’s theory here, in which she sums an individual’s utility over time with an increasing penalty over life-span. Although I’m not clear on the justification for these choices, the consequences may be appealing to many: Vanessa herself emphasizes the consequences for evaluating astronomical waste and factory farming.

• Hey Bob, good post. I’ve had the same thought (i.e. that the unit of moral analysis is timelines, or probability distributions over timelines), with a different formalism.

The trolley problem gives you a choice between two timelines (call them T1 and T2, where T1 is the pull-the-lever timeline). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ T1, and “You pull the lever” ∉ T2. Timelines contain statements that are combined as well as statements that are atomized. For example, since “You pull the lever”, “The five live”, and “The one dies” are all elements of T1, you can string these into a larger statement that is also in T1: “You pull the lever, and the five live, and the one dies”. Therefore, each timeline contains a very large statement that uniquely identifies it within any finite set of timelines. However, timelines won’t be our unit of analysis, because the statements they contain have no subjective empirical uncertainty.

This uncertainty can be incorporated by using a probability distribution over timelines, which we’ll call a forecast (F). Though there is no uncertainty in the trolley problem, we could still represent it as a choice between two forecasts: F1 guarantees T1 (the pull-the-lever timeline) and F2 guarantees T2 (the no-action timeline). Since each timeline contains a statement that uniquely identifies it, each forecast can, like timelines, be represented as a set of statements. Each statement within a forecast is an empirical prediction. For example, F1 would contain “The five live with a credence of 1”. So, the trolley problem reveals that you either morally prefer F1 (denoted F1 ≻ F2), prefer F2 (denoted F2 ≻ F1), or you believe that both forecasts are morally equivalent (denoted F1 ∼ F2).
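This formalism can be roughly sketched in Python, with timelines encoded as sets of true statements and a forecast as a probability distribution over timelines. The variable names and the `credence` helper are my own illustrative encoding, not the commenter’s notation.

```python
# A timeline as the set of all statements true within it (illustrative encoding
# of the formalism described above).
t_pull = frozenset({"You pull the lever", "The five live", "The one dies"})
t_noop = frozenset({"You don't pull the lever", "The five die", "The one lives"})

assert "You pull the lever" in t_pull
assert "You pull the lever" not in t_noop

# A forecast: a probability distribution over timelines. The trolley problem
# has no uncertainty, so each choice guarantees a single timeline.
f1 = {t_pull: 1.0}
f2 = {t_noop: 1.0}

def credence(forecast, statement):
    """The credence a forecast assigns to a statement being true."""
    return sum(p for timeline, p in forecast.items() if statement in timeline)

print(credence(f1, "The five live"))  # 1.0
print(credence(f2, "The five live"))  # 0
```

With genuine uncertainty, a forecast would simply spread its probability mass over several timelines, and `credence` would return intermediate values.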

• (In light/practice of advice I’ve read to just go ahead and comment without always trying to write something super substantive/eloquent, I’ll say that) I’m definitely interested in this idea and in evaluating it further, especially since I’m not sure I really thought about this in an explicit way before (since I generally just think “average per each person/entity’s aggregate [over time] vs. sum aggregate of all entities,” without focusing that much on a distinction between an entity’s aggregate over time and that same entity’s average over time). Such an approach might have particular relevance under models that take a less unitary/consistent view of human consciousness. I’ll have to leave this open and come back to it with a fresh/rested mind, but for now I think it’s worth an upvote for at least making me recognize that I may not have considered a question like this before.

• Thanks for the post. Coincidentally, I was thinking about how I have a strong moral preference for a longer timeline when I saw it.
I feel attracted by total total utilitarianism, but suppose we have N individuals, each living 80 years, with the same constant utility U. Now, these individuals can either live more concentrated (say, within 100 years) or more scattered (say, across 10,000 years) in time; I strongly prefer the latter (I’d pay some utility for it), even though it runs against any notion of (pure) temporal discounting. My intuition (though I don’t trust it) is that, from the “point of view of nowhere”, at some point length may trump population; but maybe it’s just some ad hoc influence of a strong bias against extinction.
Please let me know about any source discussing this (I admit I didn’t search enough for it).