Senior Research Scientist at NTT Research, Physics & Informatics Lab. jessriedel.com, jessriedel[at]gmail[dot]com
Jess_Riedel
Vacuum Decay: Expert Survey Results
Oh yea, I didn’t mind the title at all (although I do think it’s usefully more precise now :)
Agreed on additively separable utility being unrealistic. My point (which wasn’t clearly spelled out) was not that GDP growth and unit production can’t look dramatically different. (We already see that in individual products like transistors (>> GDP) and rain dances (<< GDP).) It was that post-full-automation isn’t crucially different than pre-full-automation unless you make some imo pretty extreme assumptions to distinguish them.
By “extracting this from our utility function”, I just mean my vague claim that, insofar as we are uncertain about GDP growth post-full-automation, understanding better the sorts of things people and superhuman intelligences want will reduce that uncertainty more than learning about the non-extreme features of future productivity heterogeneity (although both do matter if extreme enough). But I’m being so vague here that it’s hard to argue against.
Thanks!
My point is that the existence of a human-only good satisfying (1) and (2) is unnecessary: the very same effect can arise even given true full automation, not due to a limitation of our ability to fully automate, but due to the fact that a technological advance can encompass full automation and go beyond yielding a world where we can produce way more of everything we would ever produce without the advance, by letting us produce some goods we otherwise wouldn’t have been on track to produce at all. This has not been widely appreciated.
OK but this key feature of not being on track to produce Good 2 only happens in your model specifically because you define automation to be a thing that takes Good-2 productivity from 0 to something positive. I think this is in conflict with the normal understanding of what “automation” means! Automation is usually understood to be something that increases the productivity of something that we could already produce at least a little of in principle, even if the additional efficiency means actual spending on a specific product goes from 0 to 1. And as long as we could produce a little of Good 2 pre-automation, the utility function in your model implies that the spending in the economy would eventually be dominated by Good 2 (and hence GDP growth rates would be set by the growth in productivity of Good 2) even without full automation (unless the ratio of Good-1 and Good-2 productivity is growing superexponentially in time).
What kind of product would we be unable to produce without full automation, even given arbitrary time to grow? Off the top of my head I can only think of something really ad-hoc like “artisanal human paintings depicting the real-world otherwise-fully-autonomous economy”.
That’s basically what makes me think that “the answer is already in our utility function”, which we could productively introspect on, rather than some empirical uncertainty about what products full automation will introduce.
presumably full automation will coincide with not only a big increase in productivity growth (which raises GDP growth, in the absence of a random “utility function kink”) but also a big change in the direction of productivity growth, including via making new products available (which introduces the kind of “utility function kink” that has an arbitrary effect on GDP growth).
I’m not sure what the best precise math statement to make here is, but I suspect that at least for “separable” utility functions of the form $U(x_1, x_2) = u_1(x_1) + u_2(x_2)$, you need either a dramatic difference in diminishing returns between the $u_i$ (e.g., log vs. linear, as in your model) or you need a super dramatic difference in the post-full-automation productivity growth curves (e.g., one grows exponentially and the other grows superexponentially) that is absent pre-automation. (I don’t think it’s enough that the productivities grow at different rates post-automation.) So I still think we can extract this from our utility function without knowing much about the future, although maybe there’s a concrete model that would show that’s wrong.
The Baumol point is that among a set of already existing goods which we don’t see as very substitutable, GDP growth can be pulled down arbitrarily by the slow-growing goods. …The point I’m making here is that even if we fully automate production, and even if the quantity of every good existing today then grows arbitrarily quickly, we might create new goods as well. Once we do so, if the production of old goods grows quickly while our production of the new goods doesn’t, GDP growth may be slow.
Not sure this is disagreement per se, but I think the surprising behavior of GDP in your model is almost entirely due to the shape of the utility function and doesn’t have much to do with either (1) the distinction between existing vs new products or (2) automation. In other words, I think this is still basically Baumol, although I admit to a large extent I’m just arguing here about preferred conceptual framing rather than facts.
Consider modifying your models as follows (which I presume makes it more like a traditional Baumol model):
There is only one time period
Good 2 is always available
Productivity of Good 1 grows at rate $g_1$ and productivity of Good 2 grows at the much slower rate $g_2 \ll g_1$. Specifically, the production possibility frontier at time $t$ is $x_1/A_1(t) + x_2/A_2(t) \le 1$, under constraints $x_1, x_2 \ge 0$, where $A_i(t) = A_i(0)\,e^{g_i t}$ with $A_2(0) < 1$.
Using your same utility function $U(x_1, x_2) = \ln(x_1) + x_2$, production is fully devoted to Good 1 for all times $t < t^*$, where $t^*$ is defined by $A_2(t^*) = 1$, and during that time GDP is growing at rate $g_1$. Then at time $t^*$, it becomes worthwhile to start producing Good 2. For $t > t^*$, the productivity growth rate of Good 1 remains much higher than that of Good 2 ($g_1 \gg g_2$), and indeed the number of units of Good 1 produced grows exponentially faster than that of Good 2:

$$x_1(t) = \frac{A_1(t)}{A_2(t)} \propto e^{(g_1 - g_2)t}, \qquad x_2(t) = A_2(t) - 1 \sim e^{g_2 t}.$$

Nonetheless, the marginal value of Good 1 plummets due to the log in the utility function. Specifically, the relative price of Good 1 falls exponentially in time, $p_1/p_2 = A_2/A_1 \propto e^{-(g_1 - g_2)t}$, as does Good 1′s price-weighted fraction of production. GDP growth falls from $g_1$ and exponentially asymptotes to $g_2$ for large $t$.
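For reference, here is the algebra behind those claims (a quick sketch; the share-weighted “chained” definition of real GDP growth below is my own bookkeeping choice). Maximizing $\ln x_1 + x_2$ on the frontier $x_1/A_1 + x_2/A_2 \le 1$ gives, for $t > t^*$ (so $A_2 > 1$),

$$x_1 = \frac{A_1}{A_2}, \qquad x_2 = A_2 - 1, \qquad \frac{p_1}{p_2} = \frac{\partial U/\partial x_1}{\partial U/\partial x_2} = \frac{1}{x_1} = \frac{A_2}{A_1},$$

so Good 1’s spending share is $p_1 x_1/(p_1 x_1 + p_2 x_2) = 1/A_2$, and the share-weighted real GDP growth rate is

$$\frac{g_1 - g_2}{A_2} + \left(1 - \frac{1}{A_2}\right)\frac{\dot x_2}{x_2} \;=\; g_2 + \frac{g_1 - g_2}{A_2(t)} \;\longrightarrow\; g_2 .$$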
Two points on the model:
Although production of Good 2 was 0 for $t < t^*$, this isn’t because it was unavailable or impossibly expensive. Indeed, Good 1 is getting easier to produce relative to Good 2 for all times, so in some sense Good 2 is (relatively) easiest to produce in the distant past. Rather, no units of Good 2 are produced for $t < t^*$ because they just aren’t valued enough according to the chosen utility function.
The transition from the all-Good-1 economy ($t < t^*$) to the mostly-Good-2 economy ($t > t^*$) is due to hitting the key point in the utility curve, not to any changes in productivity growth rates (which remain constant). You can throw in automation (i.e., a productivity shock) at any point, say, increasing both growth rates $g_1$ and $g_2$ by a constant factor $F$, and you’ll still have a net fall in GDP growth rates so long as $F g_2 < g_1$.
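If it’s helpful, here’s a minimal numerical sketch of the model that reproduces these asymptotics (parameter values are arbitrary, and real GDP growth is computed as a spending-share-weighted chained index, which again is my own choice of bookkeeping):

```python
# Minimal numerical sketch of the two-good model above:
# log utility in Good 1, linear utility in Good 2, linear PPF.
import numpy as np

g1, g2 = 0.30, 0.03            # productivity growth rates, g1 >> g2
A1_0, A2_0 = 1.0, 0.01         # initial productivities (A2_0 < 1 so t* > 0)
t = np.linspace(0.0, 400.0, 40001)
A1, A2 = A1_0 * np.exp(g1 * t), A2_0 * np.exp(g2 * t)

# Optimal quantities: corner solution (all Good 1) while A2 <= 1,
# interior solution x1 = A1/A2, x2 = A2 - 1 once A2 > 1.
x1 = np.where(A2 <= 1, A1, A1 / A2)
x2 = np.maximum(A2 - 1, 0.0)

# Chained real GDP growth: spending-share-weighted quantity growth rates.
p1 = 1.0 / x1                          # relative price of Good 1 (Good 2 is numeraire)
s1 = p1 * x1 / (p1 * x1 + x2)          # Good 1's share of spending
def dlog(y):                           # numerical d(log y)/dt
    return np.gradient(np.log(np.maximum(y, 1e-300)), t)
gdp_growth = s1 * dlog(x1) + (1 - s1) * np.where(x2 > 0, dlog(x2), 0.0)

print(f"GDP growth at t=1   : {gdp_growth[100]:.3f}   (compare g1 = {g1})")
print(f"GDP growth at t=400 : {gdp_growth[-1]:.3f}   (compare g2 = {g2})")
# Multiplying both g1 and g2 by a factor F just moves the asymptote to F*g2,
# so GDP growth still ends up below the original g1 whenever F*g2 < g1.
```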
Ok, so then what are the take-aways for AI? By cleanly separating the utility-function effect from shocks to productivity, I think this is reason for us to believe that the past is a reasonable guide to the future. Yes, there could be weird kinks in our utility function, but in terms of revealing kinks there’s not much reason to think that AI-induced productivity gains will be importantly different than productivity gains from the past.
What quantity should we measure if not GDP?
First, we might consider this: Take the yearly output of today’s economy (2B tons of steel, etc.) and, for each future date, divide the future GDP by the cost (at that time) of producing today’s basket of goods. But this doesn’t work: this growth rate will, for large times, be dominated by the growth rate of the good in today’s basket whose production is growing slowest in the future. (In our model, that would be Good 2, growing at rate $g_2$.) So it’s still Baumol-vulnerable.
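In symbols (my notation, with “today” taken to be some $t_0 > t^*$ so that both goods are in today’s basket), this first candidate is

$$M_A(t) \;=\; \frac{\sum_i p_i(t)\, x_i(t)}{\sum_i p_i(t)\, x_i(t_0)} .$$

Since $p_1(t)/p_2(t) \to 0$, the denominator is eventually dominated by the Good-2 term, so in the model $M_A(t)$ grows at only $g_2$.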
Then, we might consider: compute the value of all the goods produced at a future date using today’s prices. For a single year this is just real GDP, but it isn’t the same thing as real GDP over more than one year because it isn’t chained. And lack of chaining is why this quantity is bad: if you have a niche good whose production is skyrocketing, this growth rate would explode even if its price were falling just as fast and nothing else amazing were happening in the economy. For example, if 1 transistor cost ~$1k in 1949, and we now produce almost a sextillion (~$10^{21}$) per year, that would value today’s economy at about a septillion ($10^{24}$) dollars. In our model, this would be Good 1 growing at rate $g_1 - g_2$, but this isn’t what we want.
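In the same notation, this second candidate is

$$M_B(t) \;=\; \sum_i p_i(t_0)\, x_i(t) \;\approx\; p_1(t_0)\, x_1(t) \;\propto\; e^{(g_1 - g_2)t} \quad \text{for large } t,$$

because the frozen base-year price never registers Good 1’s collapsing relative value. (The transistor arithmetic is just $\$10^3 \times 10^{21} \approx \$10^{24}$.)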
I think there’s just no getting around the fact that the kind of growth we care about is unavoidably wrapped up in our utility function. But as long as some fraction of people want to build Jupiter brains and explore Andromeda enough that they don’t devote ~all their efforts to goals that are intrinsically upper bounded, I expect AGI to lead to rapid real GDP growth (although it does likely eventually end with light-speed limits or whatever).
If growth were slow post-singularity, I think that would imply something pretty weird about human utility in this universe (or rather, the utility of the beings controlling the economy). There could of course still be crazy things happening like wild increases in energy usage at the same time, but this isn’t too different than how wild the existence of nanometer-scale transistors is relative to pre-industrial civilization. If you care about those crazy things independent of GDP (which is a measure of how fast the world overall is getting what it wants), you should probably just measure them directly, e.g., energy usage, planets colonized, etc.
Nice post. The argument for simultaneity (most models train at the same time, and then are evaled at the same time, and then released at the same time) seems ultimately to be based on the assumption that the training cap grows in discrete amounts (say, a factor of 3) each year.
> * We want to prevent model training runs of size N+1 until alignment researchers have had time to study and build evals based on models of size N.
> * For-profit companies will probably all want to start going as soon as they can once the training cap is lifted.
> * So there will naturally be lots of simultaneous runs...
But why not smoothly raise the size limit? (Or approximate that by raising it in small steps every week or month?) The key feature of this proposal, as I see it, is that there is a fixed interval for training and eval, to prevent incentives to rush. But that doesn’t require simultaneity.
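(Concretely, and this is just my arithmetic rather than anything from the post: matching a 3×-per-year cap trajectory with small steps means raising the cap by a factor of $3^{1/52} \approx 1.021$, i.e. about 2.1% per week, or $3^{1/12} \approx 1.096$, about 9.6% per month.)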
I listed this example in my comment, it was incorrect by an order of magnitude, and it was a retrodiction. “I didn’t look up the data on Google beforehand” does not make it a prediction.
I’m also a little surprised you think that modeling when we will have systems using compute similar to the human brain is very helpful for modeling when economic growth rates will change. (Like, for sure someone should be doing it, but I’m surprised you’re concentrating on it much.) As you note, the history of automation is one of smooth adoption. And, as I think Eliezer said (roughly), there don’t seem to be many cases where new tech was predicted based on when some low-level metric would exceed the analogous metric in a biological system. The key threshold for recursive feedback loops (*especially* compute-driven ones) is how well they perform on the relevant tasks, not all tasks. And the way in which machines perform tasks usually looks very different than how biological systems do it (birds vs. airplanes, etc.).
If you think that compute is the key bottleneck/driver, then I would expect you to be strongly interested in what the automation of the semiconductor industry would look like.
I like this post a lot but I will disobey Rapoport’s rules and dive straight into criticism.
Historically, many AI researchers believed that creating general AI would be more about coming up with the right theories of intelligence, but over and over again, researchers eventually found that impressive results only came after the price of computing fell far enough that simple, “blind” techniques began working (Sutton 2019).
I think this is a poor way to describe a reasonable underlying point. Heavier-than-air flying machines were pursued for centuries, but airplanes appeared almost instantly (on a historic scale) after the development of engines with sufficient power density. Nonetheless, it would be confusing to say “flying is more about engine power than the right theories of flight”. Both are required. Indeed, although the Wright brothers were enabled by the arrival of powerful engines, they beat out other would-be inventors (Ader, Maxim, and Langley) who emphasized engine power over flight theory. So a better version of your claim has to be something like “compute quantity drives algorithmic ability; if we independently vary compute (e.g., imagine an exogenous shock) then algorithms follow along”, which (I think) is what you’re arguing later in the post.
But this also doesn’t seem right. As you observe, algorithmic progress has been comparable to compute progress (both within and outside of AI). You list three “main explanations” for where algorithmic progress ultimately comes from and observe that only two of them explain the similar rates of progress in algorithms and compute. But both of these draw a causal path from compute to algorithms without considering the (to-me-very-natural) explanation that some third thing is driving them both at a similar rate. There are a lot of options for this third thing! Researcher-to-researcher communication timescales, the growth rate of the economy, the individual learning rate of humans, new tech adoption speed, etc. It’s plausible to me that compute and algorithms are currently improving more or less as fast as they can, given their human intermediaries through one or all of these mechanisms.
The causal structure is key here, because the whole idea is to try and figure out when economic growth rates change, and the distinction I’m trying to draw becomes important exactly around the time that you are interested in: when the AI itself is substantially contributing to its own improvement. Because then those contributions could be flowing through at least three broad intermediaries: algorithms (the AI is writing its own code better), compute (the AI improves silicon lithography), or the wider economy (the AI creates useful products that generate money which can be poured into more compute and human researchers).
Of course, even if AI performance is, in principle, predictable as a function of scale, we lack data on how AIs are currently improving on the vast majority of tasks in the economy, hindering our ability to predict when AI will be widely deployed. While we hope this data will eventually become available, for now, if we want to predict important AI capabilities, we are forced to think about this problem from a more theoretical point of view.
Humans have been automating mechanical tasks for many centuries, and information-processing tasks for many decades. Moore’s law, the growth rate of the thing (compute) that you argue drives everything else, has been stated explicitly for almost 58 years (and was presumably applicable for at least a few decades before that). Why are you drawing a distinction between all the information processing that happened in the past and “AI”, which you seem to be taking as a basket of things that have mostly not had a chance to be applied yet (so no data)?
If compute is the central driving force behind AI, and transformative AI (TAI) comes out of something looking like our current paradigm of deep learning, there appear to be a small set of natural parameters that can be used to estimate the arrival of TAI. These parameters are:
The total training compute required to train TAI
The average rate of growth in spending on the largest training runs, which plausibly hits a maximum value at some significant fraction of GWP
The average rate of increase in price-performance for computing hardware
The average rate of growth in algorithmic progress
This list is missing the crucial parameters that would translate the others into what we agree is most notable: economic growth. I think this needs to be discussed much more in section 4 for it to be a useful summary/invitation to the models you mention.
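For concreteness, here is the kind of back-of-the-envelope calculation those four parameters support; every number below is a made-up placeholder rather than an estimate from the post, and note that nothing in it speaks to growth rates:

```python
# Toy timeline calculation implied by the four parameters above.
# All numbers are made-up placeholders, not estimates from the post.
import math

flop_required   = 1e36   # assumed total training compute needed for TAI
largest_run_now = 1e26   # assumed FLOP in today's largest training run
spend_growth    = 2.0    # assumed yearly multiplier on spending for the largest runs
hardware_growth = 1.35   # assumed yearly gain in FLOP per dollar (price-performance)
algo_growth     = 1.7    # assumed yearly effective-compute gain from algorithmic progress

# Effective compute of the largest run grows by the product of the three rates
# (ignoring the point where spending saturates at some fraction of GWP).
yearly_multiplier = spend_growth * hardware_growth * algo_growth
years = math.log(flop_required / largest_run_now) / math.log(yearly_multiplier)
print(f"Largest run crosses the assumed TAI threshold in ~{years:.0f} years")
```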
I agree it’s important to keep the weaker fraud protection on debit cards in mind. However, for the use I mentioned above, you can just lock the debit card and only unlock it when you have a cash flow problem. (Btw, if you have an IB debit card you aren’t actively using, you should keep it locked.) Debit card liability is capped at $50 and $500 if you report fraudulent transactions within 2 days and 60 days, respectively.
That said, I have most of my net worth elsewhere, so I’m less worried about tail risks than you would reasonably be if you’re mostly invested through IB.
If you have non-qualified investments and just keep money in a savings account in case of unexpected large expenses or interruptions to your income, it may be better to instead move the money in the savings account to Interactive Brokers and invest it. Crucially, you can get a debit card from Interactive Brokers that allows you to spend on margin (borrow) at a low rate (~5%, much less than credit cards) using your investments there as collateral. That way you keep essentially all your money invested (presumably earning more than the savings account) while still having access to liquidity when you need it.
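(Illustrative arithmetic with made-up numbers rather than IB’s actual current rates: borrowing \$10,000 against the portfolio at ~5% for three months costs roughly $10{,}000 \times 0.05 \times 3/12 \approx \$125$ in interest, versus roughly \$625 at a 25% credit-card APR, while the \$10,000 stays invested instead of idling in a savings account.)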
Just to be clear: we mostly don’t argue for the desirability or likelihood of lock-in, just its technological feasibility. Am I correctly interpreting your comment to be cautionary, questioning the desirability of lock-in given the apparent difficulty of doing so while maintaining sufficient flexibility to handle unforeseen philosophical arguments?
AGI and Lock-In
If the Federal government is just buying, on the open market, an amount of coal comparable to how much would have been sold without government action, then it’s going to drive up the price of coal and increase the total amount of coal extracted. How much extra coal gets extracted depends on the supply and demand curves, and the amount of coal actually burned will almost certainly be less than in the world where the government didn’t act, but it does mean the environmental benefits of this plan will be significantly muted.
Paul Graham writes that Noora Health is doing something like this.
https://twitter.com/Jess_Riedel/status/1389599895502278659
https://opensea.io/assets/0x495f947276749ce646f68ac8c248420045cb7b5e/96773753706640817147890456629920587151705670001482122310561805592519359070209
Regarding your 4 criteria, I think they don’t really delineate how to make the sort of judgment calls we’re discussing here, so it really seems like it should be about a 5th criterion that does delineate that.
Sorry I was unclear. Those were just 4 desiderata that the criteria need to satisfy; the desiderata weren’t intended to fully specify the criteria.
If a small group of researchers at MIRI were trying to do work on verification but not getting much traction in the academic community, my intuition is that their papers would reliably meet your criteria.
Certainly possible, but I think this would partly be because MIRI would explicitly talk in their paper about the (putative) connection to TAI safety, which makes it a lot easier for me to see. (Alternative interpretation: it would be tricking me, a non-expert, into thinking there was more of a substantive connection to TAI safety than actually is there.) I am trying not to penalize researchers for failing to talk explicitly about TAI, but I am limited.
I think it’s more likely the database has inconsistencies of the kind you’re pointing at from CHAI, OpenAI, and (as you’ve mentioned) DeepMind, since these organizations have a self-described (partial) safety focus while still doing lots of non-safety and near-term-safety research. When confronted with such inconsistencies, I will lean heavily toward not including any of them, since this seems like the only feasible choice given my resources. In other words, I select your final option: “The hypothetical MIRI work shouldn’t have made the cut”.

I definitely agree that you shouldn’t just include every paper on robustness or verification, but perhaps at least early work that led to an important/productive/TAI-relevant line should be included
Here I understand you to be suggesting that we use a notability criterion that can make up for the connection to TAI safety being less direct. I am very open to this suggestion, and indeed I think an ideal database would use criteria like this. (It would make the database more useful to both researchers and donors.) My chief concern is just that I have no way to do this right now because I am not in a position to judge the notability. Even after looking at the abstracts of the work by Raghunathan et al. and Wong & Kolter, I, as a layman, am unable to tell that they are quite notable.
Now, I could certainly infer notability by (1) talking to people like you and/or (2) looking at a citation trail. (Note that a citation count is insufficient because I’d need to know it’s well cited by TAI safety papers specifically.) But this is just not at all feasible for me to do for a bunch of papers, much less every paper that initially looked equally promising to my untrained eyes. This database is a personal side project, not my day job. So I really need some expert collaborators or, at the least, some experts who are willing to judge batches of papers based on some fixed set of criteria.
Sure, sure, we tried doing both of these. But they were just taking way too long in terms of new papers surfaced per hour worked. (Hence me asking for things that are more efficient than looking at reference lists from review articles and emailing the orgs.) Following the correct (promising) citation trail also relies more heavily on technical expertise, which neither Angelica nor I have.
I would love to have some collaborators with expertise in the field to assist on the next version. As mentioned, I think it would make a good side project for a grad student, so feel free to nudge yours to contact us!
for instance if you think Wong and Cohen should be dropped then about half of the DeepMind papers should be too since they’re on almost identical topics and some are even follow-ups to the Wong paper).
Yea, I’m saying I would drop most of those too.
I think focusing on motivation rather than results can also lead to problems, and perhaps contributes to organization bias (by relying on branding to assess motivation).
I agree this can contribute to organizational bias.
I do agree that counterfactual impact is a good metric, i.e. you should be less excited about a paper that was likely to soon happen anyways; maybe that’s what you’re saying? But that doesn’t have much to do with motivation.
Just to be clear: I’m using “motivation” here in the technical sense of “What distinguishes this topic for further examination out of the space of all possible topics?”, i.e., is the topic unusually likely to lead to TAI safety results down the line? (It’s not anything to do with the author’s altruism or whatever.)
I think what would best advance this conversation would be for you to propose alternative practical inclusion criteria which could be contrasted with the ones we’ve given.
Here’s how I arrived at ours. The initial desiderata are:
- Criteria are not based on the importance/quality of the paper. (Too hard for us to assess.)
- Papers that are explicitly about TAI safety are included.
- Papers are not automatically included merely for being relevant to TAI safety. (There are way too many.)
- Criteria don’t exclude papers merely for failure to mention TAI safety explicitly. (We want to find and support researchers working in institutions where that would be considered too weird.)
(The only desiderata that we could potentially drop are #2 or #4. #1 and #3 are absolutely crucial for keeping the workload manageable.)
So besides papers explicitly about TAI safety, what else can we include given the fact that we can’t include everything relevant to safety? Papers that TAI safety researchers are unusually likely (relative to other researchers) to want to read, and papers that TAI safety donors will want to fund. To me, that means the papers that are building toward TAI safety results more than most papers are. That’s what I’m trying to get across by “motivated”.
Perhaps that is still too vague. I’m very interested in your alternative suggestions!
Thanks Jacob. That last link is broken for me, but I think you mean this?
You sort of acknowledge this already, but one bias in this list is that it’s very tilted towards large organizations like DeepMind, CHAI, etc.
Well, it’s biased toward safety organizations, not large organizations. (Indeed, it seems to be biased toward small safety organizations over large ones since they tend to reply to our emails!) We get good coverage of small orgs like Ought, but you’re right we don’t have a way to easily track individual unaffiliated safety researchers and it’s not fair.
I look forward to a glorious future where this database is so well known that all safety authors naturally send us a link to their work when it’s released, but for now the best way we have of finding papers is (1) asking safety organizations for what they’ve produced and (2) taking references from review articles. If you can suggest another option for getting more comprehensive coverage per hour of work we’d be very interested to hear it (seriously!).
For what it’s worth, the papers by Hendrycks are very borderline based on our inclusion criteria, and in fact I think if I were classifying it today I think I would not include it. (Not because it’s not high quality work, but just because I think it still happens in a world where no research is motivated by the safety of transformative AI; maybe that’s wrong?) For now I’ve added the papers you mention by Hendrycks, Wong, and Cohen to the database, but my guess is they get dropped for being too near-term-motivated when they get reviewed next year.
More generally, let me mention that we do want to recognize great work, but our higher priority is to (1) recognize work that is particularly relevant to TAI safety and (2) help donors assess safety organizations.
Thanks again! I’m adding your 2019 review to the list.
(“Optimism” is an unusual way to describe it :)
We’d also like to see this debated more. Getting expert participation is already challenging on a simple survey, but we’re hoping that publishing this will prompt skeptics (and believers) to more publicly articulate their arguments.