80,000 Hours just released their latest podcast episode, an interview with Christian Tarsney from the Global Priorities Institute. Topics discussed:
“Future bias”, or why people seem to care more about their future experiences than about their past experiences
A possible solution to moral fanaticism, where you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome
How much of humanity’s resources we should spend on improving the long-term future
How large the expected value of the continued existence of Earth-originating civilization might be
How we should respond to uncertainty about the state of the world
The state of global priorities research
You can listen to the episode here or read the transcript here.
We’re currently crossposting each 80K podcast episode a month after it comes out, at 80K’s request (they want to have time to fix transcript errors, etc.). We include the full transcript when we crosspost, so that relevant terms will come up when people search for them.
I think posts like this (encouraging people to check out recent episodes) are also good and should exist. But I’m trying to figure out how best to combine the “official 80K transcripts” with the posts people create themselves.
Perhaps the best route is to have the full-transcript posts link to any other posts where discussion happened? Pablo, I’d be interested in any other suggestions you have.
I agree with your suggestions. Not sure I have anything insightful to add, except perhaps that the initial, non-official post could also be updated to point to the subsequent official release, to better integrate the two posts? One way to do this is to replace the “linkpost” external link with a link to the EA Forum post by 80K. So e.g. for this particular post, one would replace https://80000hours.org/podcast/episodes/christian-tarsney-future-bias-fanaticism/ with https://forum.effectivealtruism.org/posts/[official-80k-post-announcing-tarsney-episode]/. This would require asking the authors of the original post to update those links. (I’d assume everyone would be okay with it, but it may add a minor layer of friction for you.)

I like the norm of having linkposts link back to the original work. Redirecting people to the Forum version of the transcript feels too much like trying to hack engagement / steal people from 80K. But I may ask people to put a link within their post to the Forum version: “You may find more discussion of the episode on 80K’s official Forum version.”
I talked to Rob, and he was happy with the suggestion of putting links to the “unofficial” posts at the top of the “official” ones, so we’ll do that going forward.
I think Tarsney is awesome in this episode… but he maybe missed two opportunities here:

i. The Berry Paradox is super cool, but the Paradox of the Question is equally addictive, and can basically be read as a joke about Global Priorities studies. But yeah, some people say it’s not so paradoxical after all…

ii. One can also look at the temporal asymmetry as a problem for intergenerational cooperation: if you don’t treat the interests of your predecessors as (equally) important, then you can expect your successors to do the same to you, and you have fewer reasons to invest in the future. Even if you do have something like altruistic preferences towards future people, that preference is irrelevant to them. (Actually, I’m sort of surprised by how rare contractualist-like accounts of intertemporal justice are in the EA literature, except for Sandberg’s piece on Rawls.)
Because people in the far future can’t benefit us, save for immortality/revival scenarios, would contractualism give us much reason to ensure they come to exist, i.e. to continue to procreate and prevent extinction? Also, do contractualist theories tend to imply the procreation asymmetry, or even antinatalism?
It seems like contractualism and risk are tricky to reconcile, according to Frick, but he makes an attempt in his paper “Contractualism and Social Risk”, which is discussed more briefly in section 1 (“Ethics of Risk”) here.
Well, you’re right that intergenerational cooperation lacks straightforward reciprocity… but we do have chains of cooperation that extend across time and often depend on the expectation that future people will sustain them; e.g., think about pension funds and long-term debt, or maybe even just plain cultural transmission.