Depending on their attitudes towards questions like take-off speed, people argue that with the development of AGI we will face world GDP doubling every few days, weeks, or years (with the number of years shrinking with each further doubling). Many people’s timelines here seem to be quite broad, quite commonly including expectations like “AGI within the next 2-3 decades very likely”.
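To make the “shrinking doubling times” claim concrete, here is a minimal illustrative sketch (all numbers are assumptions chosen purely for intuition, not forecasts): if each successive doubling of world GDP takes a fixed fraction of the time of the previous one, growth is superexponential, and the total time for arbitrarily many doublings is bounded by a convergent geometric series.

```python
# Illustrative sketch of superexponential growth: each GDP doubling takes a
# fixed fraction (0.7, an assumed number) of the previous doubling time.
initial_doubling_years = 25.0  # assumed: roughly the recent historical pace
shrink_factor = 0.7            # assumed: each doubling is 30% faster

gdp_multiple = 1.0
elapsed_years = 0.0
doubling_time = initial_doubling_years

for i in range(1, 11):
    elapsed_years += doubling_time
    gdp_multiple *= 2
    print(f"doubling {i:2d}: year {elapsed_years:6.1f}, GDP x{gdp_multiple:.0f}")
    doubling_time *= shrink_factor

# The total time for infinitely many doublings is a convergent geometric
# series: 25 / (1 - 0.7) ≈ 83 years, i.e. growth "blows up" in finite time.
```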
How the global world order will change politically as well as economically over the coming decades is an extensively discussed topic in public as well as in academia, with many goals and forecasts set for years like 2050 or 2070 (“climate neutral by 2050”, “China’s economy in 30 years”). Yet AGI is barely mentioned in economics classes, political research papers, and the like, despite its apparent impact of making any politics redundant and overturning any economic forecasts. Even if AGI were significantly less mighty than we think, and there were just a 20% chance of it occurring in the next 3 decades, that should be the number one single factor debated in every single argument on any economic/political topic with medium-length scope. Why, do you think, is AGI comparatively so rarely a topic there?
My motivated reasoning would immediately come up with explanations along the lines of:
1. People in these disciplines are just not very aware of AI developments.
2. Any forecasts/plans made assuming short timelines and fast takeoff speeds are useless anyway, so it makes sense to just assume longer timelines.
3. Maybe I am just not noticing the omnipresence of AGI debate in economic/political long-term discourse.
@1 seems unreasonable, because as soon as the first AI-economics people put forward these arguments, if the arguments were reasonable, they would become mainstream.
@2: if that assumption were consciously made, I’d expect to hear it more often as a side note.
@3 is hard to argue against, given that it assumes I don’t see the discourse. But I regularly engage with media/content from the UN on their SDGs, have taken some Economics/IR/Politics electives, try to be a somewhat informed citizen, and have friends studying these things, and I barely see AI suddenly speeding things up in any forecasts or discussions.
Why might this be the case?
To me it seems like either mainstream academia, global institutions, and public discourse are heavily missing something, or we tech/EA/AI people are overly biased about the actual relevance of our own field (I’m a CS student).
There are two totally valid conclusions to draw from the structure you’ve drawn up: that CS people or EA people are deluded, or that the world at large, including extremely smart people, is extremely bad at handling weird or new things.
Empirics has come to dominate econ (all but one? of the recent American Economic Review articles were empirically focused). Where’s the data for a treatment effect of a treatment that hasn’t occurred? Forecasts and expert predictions are not very stylish either, compared to “true causal effects.” I don’t know the best approach to solving this and am open to ideas.
I suspect this is because there isn’t a globally credible/legible consensus body generating or validating the forecasts, akin to the IPCC for climate forecasts that are made with even longer time horizons.
1 seems the most plausible to me. Reasonable arguments might eventually become mainstream, but that doesn’t mean they would do so immediately.
In particular (a) there may not be many AI-economics people, so the signal could get lost in the noise and (b) economics journals may tend to favour research that focuses on established topics or that uses clever methodology, rather than topics that are important/valuable.
I agree strongly! It would be interesting to research how economists regarded the creation of the internet. My guess is that there was in fact little pre-1990 research on how the internet would change the world.
Matt—I also strongly agree with this. I worked in an economics department for 4 years, as the token psychologist. Academic economics is largely obsessed with getting clever theorems and quantitative results published in one of the few highest-impact econ journals. Anything that challenges core assumptions in econ (e.g. the Rational Man hypothesis, rapid convergence on Nash equilibrium play in complex games, importance of positional goods in advanced market economies, continuity in rates of economic growth) is usually rejected, because it would undermine one’s chances of getting that American Economic Review paper accepted, and it would distract from the tenure track....
There’s a lot going on here, but this person’s take on economics seems bad, and it is also a reductive take that is very common.
To calibrate, this take is actually incredibly similar to how someone critiquing EA would say “all EA is obsessed with esoteric philosophical arguments and captured by AI and the billionaire donors”[1].
Reasons:
A decent chunk of economics is concerned with meta-economics and disliking economics. These ideas are held by many of the key figures.
In addition to these negative meta views, which are common, entire subdisciplines or alt schools are concerned with worldview changes in economics:
See behavioral economics (which is, well, sort of not super promising, because it seems to be a repackaging of anecdotes/psychology).
See heterodox economics (which also does poorly, basically for similar reasons as above, as well as because of challenges with QA/QC/talent supply; diluting disciplines wholesale doesn’t really work).
As a plus, economics has resisted the extremes of the culture wars and kept its eye on the work, while internally, in the eyes of most students and some faculty, IMO proportionately giving disadvantaged people some equity or a leg up (obviously not completely).
The effort/practice given to diversity is pretty similar to the level EA orgs have chosen, and I suspect that’s not a coincidence.
Economics has avoided the replication crisis, the event that drives a lot of negative opinion about mainstream science in EA.
To me, it’s obvious economics would... It’s very hard to communicate why, e.g., without showing EAs the environment in an empirical seminar (an argument in labor economics between senior faculty).
The amount of respect senior/mainstream economists give to reality and to talking to people on the ground in empirical matters is large, and many ideas about unobserved/quality/social models have come out of this (although these ideas themselves can be attacked as repackaging the obvious).
Some work in economics, like environmental economics (not the same as “ecological economics”, which is one of the unpromising heterodox schools), and practical work, like kidney exchange (Al Roth), is highly cherished by almost all economists.
Health economics and development economics are basically the cornerstone of the most concrete/publicized sector of EA, that is, global health, e.g. GiveWell.
GiveDirectly was literally founded and driven by economists; the entire methodology/worldview is an economics one.
https://www.givedirectly.org/research-at-give-directly/
There’s (a lot) more, but I have to do some work and I got tired of writing.
The issues with economics are similar to, literally isomorphic with, and in one case identical to those in EA (the math, dissenting subcultures, and decision theory). I always wanted to write this up, but it seemed ancillary, hard to do well (for the social-reality reasons), and embarrassing in multiple ways.
I don’t usually add this, but I’m writing it because this person seems to be setting themselves up for a bit of a public-figure role in EA, and mentions credentials a bit:
Some of the content in previous comments and this comment isn’t a great update with regard to those goals, and I would tap the brakes here. In this comment, it’s not the general take in the content itself (negative takes on economics are fine, and good ones are informative), but the intellectual depth/probable quality of the context of these specific ideas (“Rational Man hypothesis, rapid convergence on Nash equilibrium play in complex games, importance of positional goods in advanced market economies, continuity in rates of economic growth”) that pattern-matches poorly.
Hi Charles, you seem to be putting a lot of weight on a short, quick note that I made as a comment on a comment on an EA Forum post, based on my personal experiences in an Econ department (I wasn’t ‘mentioning credentials’, I was offering observations based on experience).
(You also included some fairly vague criticisms of my previous posts and comments that could be construed as rather ad hominem.)
You are correct that there are many subfields within Econ, some of which challenge standard models, and that Econ has some virtues that other social sciences often don’t have. Fair enough.
The question remains: why is Econ largely ignoring the likely future impact of AI on the economy (apart from some specific issues such as automation and technological unemployment), and treating most of the economy as likely to carry on more or less as it is today?
Matt and I offered some suggestions based on what we see as a few intellectual biases and blind spots in Econ. Do you have any other suggestions?
You can just as easily say that global institutions are biased about the relevance of their own fields, and I think that is a good enough explanation: traditional elite fields (press, actors, lawyers, autocrats) don’t teach AI, and so can’t influence the development of AGI. To perform the feats of confidence that gain or defend career capital in those fields, or to win the agreement and flattery of their peers, they have to avoid acknowledging that AGI is important, because if it’s important, then none of them are important.
But I think this dam will start to break, generally. Economists know better than the other specializations: they have the background in decision theory to know what superintelligence will mean, and they see what’s happening in industry. The military is also sometimes capable of recognizing and responding to emerging risks. They’re going to start to speak up, and then maybe the rest of the elite will have to face it.
A lot of good potential answers are discussed in this earlier post’s comments. My favored explanation is that AGI is a minority concern even within CS academia, so we shouldn’t expect it to have much impact outside it.
I think it’s 1. Plenty of ideas are reasonable but not mainstream, e.g., in Russia, the idea of not attacking Ukraine. The experts probably don’t get into the technical arguments and dismiss AGI as hype.
For the same reason that, e.g., net electricity generation from fusion power is not the “number one single factor debated in every single argument on any economic/political topic with medium-length scope”: until it exists, it is fictional, and why should everyone focus so much on fictional technology? It remains a narrow, academic field. The difference is that there is actual progress towards fusion.