Now: TYPE III AUDIO.
Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, etc.
Before that: My CV.
Side-projects: Inbox When Ready; Radio Bostrom; The Valmy; Comment Helper for Google Docs.
Thank you (again) for this.
I think this message should be emphasized much more in many EA and longtermist contexts, e.g. in the introductory materials on effectivealtruism.org and 80000hours.org.
As your paper points out, longtermist axiology probably changes the ranking between x-risk and catastrophic-risk interventions in some cases. But there's a lot of convergence, and in practice your ranked list of interventions won't change much (even if the differences between them do, once you adjust for cluelessness, Pascal's mugging, etc.).
Some worry that, if you're a fan of longtermist axiology, this approach to comms is disingenuous. I strongly disagree: it's normal to start your comms by finding common ground and to elaborate on your full reasoning later on.
Andrew Leigh MP seems to agree. Here’s the blurb from his recent book, “What’s The Worst That Could Happen?”: