Ask Me Anything!

Thanks for all the questions, everyone! I'm going to wrap up here. Maybe I'll do this again in the future; hopefully others will too!


Hi,

I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I'll lead by example. (If it goes well, hopefully others will try it out too.)

Below I've written out what I'm currently working on. Please ask any questions you like, about anything: I'll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I'm hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I'm not able to respond to.

If you don't want to post your question publicly or non-anonymously (e.g. you're asking a "Why are you such a jerk?" sort of thing), or if you don't have a Forum account, you can use this Google form.


What I'm up to

Book

My main project is a general-audience book on longtermism. It's coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I'm currently using is What We Owe The Future.

It'll hopefully complement Toby Ord's forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view, but without relying heavily on them.

In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare.

Roughly, I'm dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I've given the publishers a deadline of March 2021 for submission; if I meet that, the book would come out in late 2021 or early 2022. I'm planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book.

My academic book, Moral Uncertainty (co-authored with Toby Ord and Krister Bykvist), should come out early next year: it's been submitted, but OUP have been exceptionally slow in processing it. It's not radically different from my dissertation.

Global Priorities Institute

I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:

  • The case for longtermism, with Hilary Greaves. It's making the core case for strong longtermism, arguing that it's entailed by a wide variety of moral and decision-theoretic views.

  • The Evidentialist's Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.

  • A paper, with Tyler John, exploring the political philosophy of age-weighted voting.

I have various other draft papers, but have put them on the back burner for the time being while I work on the book.

Forethought Foundation

Forethought is a sister organisation to GPI, which I take responsibility for: it's legally part of CEA and independent of the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.

Utilitarianism.net

Darius Meissner and I (with help from others, including Aron Vallinder, Pablo Stafforini and James Aung) are creating an introduction to classical utilitarianism at utilitarianism.net. Even though 'utilitarianism' gets several times the search traffic of terms like 'effective altruism', 'givewell', or 'peter singer', there's currently no good online introduction to utilitarianism. This seems like a missed opportunity. We aim to put the website online in early October.

Centre for Effective Altruism

We're down to two very promising candidates in our CEO search; this continues to take up a significant chunk of my time.

80,000 Hours

I meet regularly with Ben and others at 80,000 Hours, but I'm currently considerably less involved with 80k strategy and decision-making than I am with CEA.

Other

I still take on select media, especially podcasts, and select speaking engagements, such as for the Giving Pledge a few months ago.

I've been taking more vacation time than I used to (planning six weeks in total this year), and I've been dealing on and off with chronic migraines. I'm not sure if the additional vacation time has decreased or increased my overall productivity, but the migraines have decreased it by quite a bit.

I am continuing to try (and often fail) to become more focused in what work projects I take on. My long-run career aim is to straddle the gap between research communities and the wider world, representing the ideas of effective altruism and longtermism. This pushes me in the direction of prioritising research, writing, and select media, and I've made progress in that direction, but my time is still more split than I'd like.