I primarily write academic papers and do outreach through my blog. I do try to post here when possible (and I always appreciate cross-posts!), but please do check dthorstad.com for my academic papers and reflectivealtruism.com for outreach.
To be fair, this could trigger lawsuits. I hope someone is reflecting on FTX, but I wouldn’t expect anyone to be keen on discussing their own involvement with FTX publicly and in great detail.
Here’s a gentle introduction to the kinds of worries people have (https://spectrum.ieee.org/power-problems-might-drive-chip-specialization). Of the cited references, “The chips are down for Moore’s law” is probably the best on this issue, but a little longer and harder. There’s plenty of literature on problems with heat dissipation if you search the academic literature. I can dig up references on energy if you want, but with Sam Altman saying we need a fundamental energy revolution even to get to AGI, is there really much controversy over the idea that we’ll need a lot of energy to get to superintelligence?
Ah—that comes from the discontinuity claim. If you have accelerating growth that isn’t sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.
(The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that’s harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement.)
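If it helps, here’s a back-of-the-envelope illustration of that contrast with purely made-up numbers (the per-cycle factor g and the cycle counts are mine, chosen only to show the shape of the argument):

```python
# Crude stand-in for compounding self-improvement: each cycle
# multiplies capability by g. Illustrative numbers only.
g = 1.5

burst = g ** 5        # growth that fizzles after a few cycles
sustained = g ** 100  # growth sustained over many cycles

print(f"short burst: {burst:.1f}x baseline")      # ~7.6x: impressive, no discontinuity
print(f"sustained:   {sustained:.2e}x baseline")  # ~4.1e17x: discontinuity-like
```

Only the sustained case delivers anything that looks like crossing an event horizon; the short burst is the population-growth case.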
Here’s the talk version for anyone who finds it easier to listen to videos:
Summary: Against the Singularity Hypothesis (David Thorstad)
Congrats Elliott! Looks like a nice paper.
Thanks Peter!
I wonder if you’d be willing to be a bit more vocal about this. For example, the second most upvoted comment (27 karma right now) takes me to task for saying that “most experts are deeply skeptical of Ord’s claim” (1/30 existential biorisk in the next 100 years). I take that to be uncontroversial. Would you be willing to say so?
Thanks Caleb! I give reasons for skepticism about high levels of existential biorisk in Parts 9-11 of this series.
Yep—nailed it!
I’m one of the editors of the book. Just wanted to confirm everything Toby and Pablo said. It’s fully open-access, and about to be sent off for production. So while that doesn’t give us a firm release date, realistically we’re looking at early 2024 if we’re lucky and … not early 2024 if we’re not lucky.
Yep! Honestly, I’m not good at technology—how do I change domains without making all of my backlinks go dead? [Edit: Sorry, that’s probably the wrong term. I mean: all of my blog posts link to other blog posts. Is there a way to transfer domains that preserves all of those links?]
Thanks Ollie for your work on this program! You did a great job with it.
Thanks Dan! As mentioned, to think that cumulative risk is below 1-(10^-8) is to make a fairly strong claim about per-century risk. If you think we’re already there, that’s great!
Bostrom was actually considering something slightly stronger: the prospect of reducing cumulative risk by a further 10^(-8) from wherever it currently stands. That’s going to be hard even if you think that cumulative risk is already lower than I do. So, for example, you can ask what changes you’d have to make to per-century risk to drop cumulative risk from r to r-(10^-8) for any r in [0,1). Honestly, that’s a more general and interesting way to do the math here. The only reasons I didn’t do this are that (a) it’s slightly harder, (b) most academic readers will already find per-century risk of ~one in a million relatively implausible, and (c) my general aim was to illustrate the importance of carefully distinguishing between per-century risk and cumulative risk.

It might be a good idea, in rough terms, to think of a constant hazard rate as an average across all centuries. I suspect that if the variance of risk across centuries is low-ish, this works well, whereas if the variance of risk across centuries is high-ish, it breaks down. In particular, on a time of perils view, focusing on average (mean) risk rather than on explicit distributions of risk across centuries will strongly over-value the future, since a future in which much of the risk is faced early on is lower-value than a future in which risk is spread out.
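To make the arithmetic concrete, here’s a minimal sketch of that calculation, assuming a constant per-century hazard and an illustrative 1,000-century horizon (both assumptions are mine, purely for illustration):

```python
# Cumulative risk over N centuries at constant per-century risk p:
#   C(p, N) = 1 - (1 - p)**N,  so  p = 1 - (1 - C)**(1 / N).

def per_century_risk(cumulative_risk: float, centuries: int) -> float:
    """Constant per-century risk implied by a given cumulative risk."""
    return 1 - (1 - cumulative_risk) ** (1 / centuries)

N = 1000      # illustrative horizon
delta = 1e-8  # Bostrom-style reduction in cumulative risk

for r in (0.5, 0.9, 1 - 2e-8):
    drop = per_century_risk(r, N) - per_century_risk(r - delta, N)
    print(f"r = {r}: per-century risk must fall by roughly {drop:.1e}")
```

The closer r sits to 1, the larger the required change in per-century risk. At the extreme, a sustained 20% per-century risk puts cumulative risk around 1-(10^-97) over 1000 centuries, and getting cumulative risk down to roughly 1-(10^-8) from there would mean cutting per-century risk from 0.2 to about 0.018.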
Strong declining trends in hazard rates induce a time-of-perils-like structure, though on some models they might make slightly weaker assumptions about risk than leading time of perils models do. At least one leading time of perils model (Aschenbrenner) has a declining hazard structure. In general, the question will be how to justify a declining hazard rate, given a standard story on which (a) technology drives risk, and (b) technology is increasing rapidly. I think that some of the arguments against the time of perils hypothesis made in my paper “Existential risk pessimism and the time of perils” will be relevant here, whereas others may be less relevant, depending on your view.
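To see the structural point, here’s a toy comparison of constant versus geometrically declining hazards; the initial hazard h0 = 0.1 and decay factor d = 0.5 are made-up numbers, not parameters from Aschenbrenner’s model or anyone else’s:

```python
import math

def survival(hazards) -> float:
    """Probability of surviving every century in the sequence."""
    return math.prod(1 - h for h in hazards)

h0, d, N = 0.1, 0.5, 1000  # illustrative parameters only

# Declining hazard h_t = h0 * d**t: total hazard converges, so long-run
# survival has a positive floor, a time-of-perils-like shape.
declining = survival(h0 * d**t for t in range(N))

# Constant hazard: survival decays to zero as the horizon grows.
constant = survival(h0 for _ in range(N))

print(f"declining: P(survive {N} centuries) ~ {declining:.3f}")  # ~0.813
print(f"constant:  P(survive {N} centuries) ~ {constant:.1e}")   # ~1.7e-46
```

The decline has to be fast enough that the hazards sum to something finite; that’s the assumption which, on my view, needs an argument given (a) and (b) above.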
In general, I’d like to emphasize the importance of arguing for views about future rates of existential risk. Sometimes effective altruists are very quick to produce models and assign probabilities to them. Models are good (they make things clear!), but they don’t reduce the need to support models with arguments, and assignments of probability are not arguments, but rather statements in need of argument.
Thanks Vasco! Yes, as in my previous paper, though (a) most of the points I’m making get some traction even against models on which the time of perils hypothesis is true, and (b) they get much more traction if the time of perils hypothesis is false.
For example, on the first mistake, the gap between cumulative and per-unit risk is smaller if risk is concentrated in a few centuries (time of perils) than if it is spread across many centuries. And on the second mistake, the importance of background risk is reduced if that background risk will remain at a meaningful level for only a few centuries.
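For a quick numerical version of the point about the first mistake (the numbers are mine, chosen only to make the contrast visible):

```python
import math

def cumulative(hazards) -> float:
    """Chance of succumbing in at least one century."""
    return 1 - math.prod(1 - h for h in hazards)

# Concentrated: 10% risk in each of two perilous centuries, then ~zero.
print(cumulative([0.1, 0.1]))      # ~0.19: about 2x the per-century figure
# Spread: 0.1% risk in each of 1000 centuries.
print(cumulative([0.001] * 1000))  # ~0.63: about 630x the per-century figure
```

So concentration shrinks the gap between the per-century number you quote and the cumulative number that matters.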
I think that the third mistake (ignoring population dynamics) should retain much of its importance on time of perils models. Actually, it might be more important insofar as those models tend to give higher probability to large-population scenarios coming about. I’d be interested to see how the numbers work out here, though.
Good catch Eevee—thanks! I hadn’t caught this when proofreading the upload on the website. (Not our operations team’s fault. They’ve been absolutely slammed with conference and event organizing recently, and I pushed them to rush this paper out so it would be available online).
Thanks Toby! Comments much appreciated.
Whoops, thanks!
You folks impress me! But seriously, that’s a big ask.