I primarily write academic papers and do outreach through my blog. I do try to post here when possible (and I always appreciate cross-posts!), but please do check dthorstad.com for my academic papers and reflectivealtruism.com for outreach.
David Thorstad
Here’s the talk version for anyone who finds it easier to listen to videos:
Congrats Elliott! Looks like a nice paper.
Thanks Peter!
I wonder if you’d be willing to be a bit more vocal about this. For example, the second most upvoted comment (27 karma right now) takes me to task for saying that “most experts are deeply skeptical of Ord’s claim” (1/30 existential biorisk in the next 100 years). I take that to be uncontroversial. Would you be willing to say so?
Thanks Caleb! I give reasons for skepticism about high levels of existential biorisk in Parts 9-11 of this series.
Yep—nailed it!
I’m one of the editors of the book. Just wanted to confirm everything Toby and Pablo said. It’s fully open-access, and about to be sent off for production. So while that doesn’t give us a firm release date, realistically we’re looking at early 2024 if we’re lucky and … not early 2024 if we’re not lucky.
Yep! Honestly, I’m not good at technology—how do I change domains without making all of my backlinks go dead? [Edit: Sorry, that’s probably the wrong term. I mean: all of my blog posts link to other blog posts. Is there a way to transfer domains that preserves all of those links?]
Thanks Ollie for your work on this program! You did a great job with it.
Thanks Dan! As mentioned, to think that cumulative risk is below 1-(10^-8) is to make a fairly strong claim about per-century risk. If you think we’re already there, that’s great!
Bostrom was actually considering something slightly stronger: the prospect of reducing cumulative risk by a further 10^(-8) from wherever it is at currently. That’s going to be hard even if you think that cumulative risk is already lower than I do. So for example, you can ask what changes you’d have to make to per-century risk to drop cumulative risk from r to r-(10^-8) for any r in [0,1). Honestly, that’s a more general and interesting way to do the math here. The only reasons I didn’t do this are that (a) it’s slightly harder, (b) most academic readers will already find per-century risk of ~one-in-a-million relatively implausible, and (c) my general aim was to illustrate the importance of carefully distinguishing between per-century risk and cumulative risk.

It might be a good idea, in rough terms, to think of a constant hazard rate as an average across all centuries. I suspect that if the variance of risk across centuries is low-ish, this is a good idea, whereas if the variance of risk across centuries is high-ish, it’s a bad idea. In particular, on a time of perils view, focusing on average (mean) risk rather than explicit distributions of risk across centuries will strongly over-value the future, since a future in which much of the risk is faced early on is lower-value than a future in which risk is spread out.
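To make that exercise concrete, here’s a rough numeric sketch. The horizon and starting cumulative risk below are illustrative assumptions of mine, not figures from the paper; the point is just the shape of the calculation relating per-century risk to cumulative risk.

```python
# Rough illustration only: N and r below are my own assumed values.
N = 1000          # horizon in centuries (assumed)
r = 0.5           # current cumulative risk, any value in [0, 1) (assumed)
delta = 1e-8      # Bostrom-style reduction in cumulative risk

# If a constant per-century risk p holds for N centuries, cumulative risk is
# r = 1 - (1 - p)**N, so the implied per-century risk is p = 1 - (1 - r)**(1/N).
p = 1 - (1 - r) ** (1 / N)
p_new = 1 - (1 - (r - delta)) ** (1 / N)

print(f"implied per-century risk now:        {p:.6e}")
print(f"per-century risk needed for r-delta: {p_new:.6e}")
print(f"required reduction in per-century p: {p - p_new:.3e}")
```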
Strong declining trends in hazard rates induce a time-of-perils-like structure, except that on some models they may rest on somewhat weaker assumptions about risk than leading time of perils models do. At least one leading time of perils model (Aschenbrenner) has a declining hazard structure. In general, the question will be how to justify a declining hazard rate, given a standard story on which (a) technology drives risk, and (b) technology is increasing rapidly. I think that some of the arguments made in my paper “Existential risk pessimism and the time of perils” against the time of perils hypothesis will be relevant here, whereas others may be less relevant, depending on your view.
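For what it’s worth, here is a minimal sketch of why a declining hazard rate behaves like a time of perils. The starting hazard and decay rate are toy numbers of my own, not parameters from Aschenbrenner’s model: with a constant hazard, cumulative risk climbs towards 1 as the horizon grows, while a geometrically declining hazard keeps long-run cumulative risk bounded well below 1.

```python
# Toy comparison, not any published model: constant vs geometrically declining
# per-century hazard. The 1% starting hazard and 10% per-century decay are assumed.

def cumulative_risk(hazards):
    """Cumulative risk given a sequence of per-century hazards."""
    survival = 1.0
    for h in hazards:
        survival *= 1 - h
    return 1 - survival

N = 500
constant = [0.01] * N                             # 1% risk every century
declining = [0.01 * 0.9 ** t for t in range(N)]   # 1% risk, decaying 10% per century

print(f"constant hazard:  cumulative risk ~ {cumulative_risk(constant):.3f}")
print(f"declining hazard: cumulative risk ~ {cumulative_risk(declining):.3f}")
```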
In general, I’d like to emphasize the importance of arguing for views about future rates of existential risk. Sometimes effective altruists are very quick to produce models and assign probabilities to models. Models are good (they make things clear!) but they don’t reduce the need to support models with arguments, and assignments of probability are not arguments, but rather statements in need of argument.
Thanks Vasco! Yes, as in my previous paper, though (a) most of the points I’m making get some traction even against models in which the time of perils hypothesis is true, and (b) they get much more traction if the time of perils hypothesis is false.
For example, on the first mistake, the gap between cumulative and per-unit risk is lower if risk is concentrated in a few centuries (time of perils) than if it’s spread across many centuries. And on the second mistake, the importance of background risk is reduced if that background risk is going to be around at a meaningful level for only a few centuries.
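A toy calculation (numbers mine, purely for illustration) shows how that gap opens up: concentrated risk leaves cumulative risk close to the worst single century’s risk, while spread-out risk makes cumulative risk many times larger than any single century’s risk.

```python
# Toy numbers of my own, purely for illustration.
concentrated = [0.2, 0.2]       # 20% risk in each of two perilous centuries
spread = [0.01] * 100           # 1% risk in each of a hundred centuries

for name, hazards in [("concentrated", concentrated), ("spread", spread)]:
    survival = 1.0
    for h in hazards:
        survival *= 1 - h
    print(f"{name}: max per-century risk {max(hazards):.2f}, "
          f"cumulative risk {1 - survival:.2f}")
```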
I think that the third mistake (ignoring population dynamics) should retain much of its importance on time of perils models. Actually, it might be more important insofar as those models tend to give higher probability to large-population scenarios coming about. I’d be interested to see how the numbers work out here, though.
Good catch Eevee—thanks! I hadn’t caught this when proofreading the upload on the website. (Not our operations team’s fault. They’ve been absolutely slammed with conference and event organizing recently, and I pushed them to rush this paper out so it would be available online).
Thanks Toby! Comments much appreciated.
Whoops, thanks!
I really liked and appreciated both of your posts. Please keep writing them, and I hope that future feedback will be less sharp.
Thanks for the kind words, Jamie!
I always appreciate engagement with the blog and I’m happy when people want to discuss my work on the EA Forum, including cross-posting anything they might find interesting. I also do my best to engage as I can on the EA Forum: I posted this blog update after several EA Forum readers suggested I do it.

I’m hesitant to outright post my blog posts as EA Forum posts. Although this is in many senses a blog about effective altruism, I’m not an effective altruist, and I need to keep enough distance in terms of the readership I need to answer to, as well as how I’m perceived.
I wouldn’t complain if you wanted to cross-post any posts that you liked. This has happened before and I was glad to see it!
Thanks mhendric! Those are both good papers to consider and I’ll do my best to address them.
I didn’t know the “But is it altruism” paper. Please do send it when it is out—I’d like to read it and hopefully write about it.
Interesting! I think this should be manageable. Would people listen to this?
Thanks Jason! And yes, I’m a southern boy. Vandy is just what I was looking for. I appreciate the kind words and your continued readership.
Thanks mhendric! I appreciate the kind words.
The honest truth is that prestige hierarchies get in the way of many people writing good critiques of EA. For any X (=feminism, marxism, non-consequentialism, …) there’s much more glory in writing a paper about X than a paper about X’s implications for EA, so really the only way to get a good sense of what any particular X implies for EA is to learn a lot about X. That’s frustrating, because EAs genuinely want to know what X implies for EA, but don’t have years to learn.
Some publications (The Good It Promises volume; my blog) aim to bridge the gap, but there are also some decent academic papers if you’re willing to read full papers. The Pettigrew, Heikkinen, and Curran papers in the GPI working paper series are worth reading, and GPI’s forthcoming longtermism volume will have many others.
In the meantime … I share your frustration. It’s just very hard to convince people to sit down and spend a few years learning about EA before they write critiques of it (just like it’s very hard to convince EAs to spend a few years learning about some specific X just to see what X might imply for EA). I’m not entirely sure how we will bridge this gap, but I hope we do.
I’ll try to write more on the regression to the inscrutable and on AI papers. Any particular papers you want to hear about?
Ah—that comes from the discontinuity claim. If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.
(The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that’s harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement.)