[Some of my high-level views on AI risk.]
[I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone was interested in my views. / It might sometimes be useful to be able to link to this.]
[In this post I generally state what I think before updating on other people’s views – i.e., what’s sometimes known as ‘impressions’ as opposed to ‘beliefs.’]
Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider to be unlikely (~20%) but worth betting on.
The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the ‘rules of the game’ under which interactions between actors will play out.
Why I’m interested in TAI as a lever to improve the long-run future
I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular,
My overarching interest is to make the lives of as many moral patients as possible go as well as possible, no matter where or when they live; and
I think that in the world we find ourselves in – it could have been otherwise – this goal entails strong longtermism, i.e. the claim that “the primary determinant of the value of our actions today is how those actions affect the very long-term future.”
Less standard but not highly unusual (within EA) high-level views I hold more tentatively:
The indirect long-run impacts of our actions are extremely hard to predict and don’t ‘cancel out’ in expectation. In other words, I think that what Greaves (2016) calls complex cluelessness is a pervasive problem. In particular, evidence that an action will have desirable effects in the short term generally is not a decisive reason to believe that this action would be net positive overall, and neither will we be able to establish the latter through any other means.
Increasing the relative influence of longtermist actors is one of the very few strategies we have good reasons to consider net positive. Shaping TAI is a particularly high-leverage instance of this strategy, where the main mechanism is reaping an ‘epistemic rent’ from having anticipated TAI earlier than other actors. I take this line of support to be significantly more robust than any particular story about how TAI might pose a global catastrophic risk, including even broad operationalizations of the ‘value alignment problem.’
My empirical views on TAI
I think the strongest reasons to expect TAI this century are relatively outside-view-based (I talk about this century just because I expect that later developments are harder to predictably influence, not because I think a century is a particularly meaningful time horizon or because I think TAI would be less important later):
We’ve been able to automate an increasing number of tasks (with increasing performance and falling cost), and I’m not aware of a convincing argument for why we should be highly confident that this trend will stop short of full automation – i.e., AI systems being able to do all tasks more efficiently, in economic terms, than humans – despite moderate scientific and economic incentives to find and publish such an argument.
Independent types of weak evidence such as trend extrapolation and expert surveys suggest we might achieve full automation this century.
Incorporating full automation into macroeconomic growth models predicts – at least under some assumptions – a sustained higher rate of economic growth (e.g. Hanson 2001, Nordhaus 2015, Aghion et al. 2017), which arguably was the main driver of the welfare-relevant effects of the Industrial Revolution. (A toy sketch of the basic mechanism follows below this list.)
Accelerating growth this century is consistent with extrapolating historic growth rates, e.g. Hanson (2000).
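To make the growth-model point more concrete, here is a toy sketch of the standard mechanism. This is my own illustration under simplifying assumptions, not a reproduction of the cited models (Hanson, Nordhaus, and Aghion et al. use richer setups):

```latex
% Toy sketch (illustrative only): why full automation can sustain a higher growth rate.
% Baseline: with a fixed labor force L and diminishing returns to capital,
\[ Y = K^{\alpha} (A L)^{1-\alpha}, \qquad 0 < \alpha < 1, \]
% accumulating capital alone cannot sustain growth, because the marginal product of K falls.
% Full automation: if AI capital can substitute for labor, effective labor scales with K,
% so output becomes approximately linear in capital,
\[ Y \approx \tilde{A} K . \]
% With a constant savings rate s and depreciation rate delta, capital (and hence output)
% then grows at
\[ g = \frac{\dot{K}}{K} = s \tilde{A} - \delta , \]
% which no longer declines as K accumulates, i.e. growth stays permanently higher.
```

Richer versions of this argument (e.g. where automation also speeds up the production of new ideas) can yield accelerating rather than merely faster growth, but the linear-in-capital case is enough to illustrate the qualitative shift.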
I think there are several reasons to be skeptical, but that the above succeeds in establishing a somewhat robust case for TAI this century not being wildly implausible.
My impression is that I’m less confident than the typical longtermist EA in various claims around TAI, such as:
Uninterrupted technological progress would eventually result in TAI;
TAI will happen this century;
we can currently anticipate any specific way of positively shaping the impacts of TAI;
if the above three points were true then shaping TAI would be the most cost-effective way of improving the long-term future.
My guess is this is due to different priors, and due to frequently having found extant specific arguments for TAI-related claims (including by staff at FHI and Open Phil) less convincing than I would have predicted. I still think that work on TAI is among the few best shots for current longtermists.
Awesome post, Max, many thanks for this. I think it would be good if these difficult questions were discussed more on the forum by leading researchers like yourself.
I think you should post this as a normal post; it’s far too good and important to be hidden away on the shortform.
[deleted because the question I asked turned out to be answered in the comment, upon careful reading]
What’s the right narrative about global poverty and progress? Link dump of a recent debate.
The two opposing views are:
(a) “New optimism:”  This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that’s great.  In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.
Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely.
In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker’s books in his My worldview in 5 books post, saying that “Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism.”
(b) Hickel’s critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:
1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that “real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless.”
2. Hickel prefers to look at different indicators than the new optimists. For example, he has argued for different operationalizations of extreme poverty or inequality.
Link dump (not necessarily comprehensive)
If you only read two things, I’d recommend (1) Hasell’s and Roser’s article explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic.
By Hickel (i.e. against “new optimism”):
By “new optimists”:
Joe Hasell and Max Roser: https://ourworldindata.org/extreme-history-methods
Steven Pinker: https://whyevolutionistrue.wordpress.com/2019/01/31/is-the-world-really-getting-poorer-a-response-to-that-claim-by-steve-pinker/
Commentary by others:
Branko Milanovic (a leading economic historian): https://www.globalpolicyjournal.com/blog/11/02/2019/global-poverty-over-long-term-legitimate-issues
Dylan Matthews: https://www.vox.com/future-perfect/2019/2/12/18215534/bill-gates-global-poverty-chart
LW user ErickBall: https://www.lesswrong.com/posts/eTMgL7Cx8TsA9nedn/is-the-world-getting-better-a-brief-summary-of-recent-debate
I’m largely unpersuaded by Hickel’s charge that historic poverty data is invalid. Sure, it’s much worse than contemporary data. But based on Hasell’s and Roser’s article, my impression is that the data is better than I would have thought, and its orthodox analysis and interpretation more sophisticated than I would have thought. I would be surprised if access to better data qualitatively changed the “new optimist” conclusion.
I think there is room for debate over which indicators to use, and that Hickel makes some interesting points here. I find it regrettable that the debate around this seems so adversarial.
Still, my sense is that there is an important, true, and widely underappreciated (particularly by people on the left, including my past self) core of the “new optimist” story. I’d expect looking at other indicators could qualify that story, make it less simplistic, point to important exceptions, etc. – but I’d probably consider a choice of indicators that painted an overall pessimistic picture as quite misleading and missing something important.
On the other hand, I would quite strongly want to resist the conclusion that everything in this debate is totally settled, and that the new optimists are clearly right about everything, in the same way in which orthodox climate science is right about climate change being anthropogenic, or orthodox medicine is right about homeopathy not being better than placebo. But I think the key uncertainties are not in historic poverty data, but in our understanding of wellbeing and its relationship to environmental factors. Some examples of why I think it’s more complicated:
The Easterlin paradox
The unintuitive relationship between (i) subjective well-being in the sense of the momentary affective valence of our experience, on the one hand, and (ii) reported life satisfaction, on the other. See e.g. Kahneman’s work on the “experiencing self” vs. the “remembering self”.
On many views, the total value of the world is very sensitive to population ethics, which is notoriously counterintuitive. In particular, on many plausible views, the development of the total welfare of the world’s human population is dominated by its increasing population size (see the crude illustration below this list).
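To illustrate that last point, here is a deliberately crude back-of-the-envelope sketch. The population figures are standard rough estimates; the average-welfare figure is an assumption chosen purely for illustration:

```latex
% Crude illustration (illustrative numbers) of why, on a simple totalist view, the trend
% in total human welfare is dominated by population growth.
% Total welfare = population size times average welfare per person:
\[ W_{\text{total}} = N \cdot \bar{w} . \]
% World population grew from roughly 1 billion in 1800 to roughly 7-8 billion today,
% i.e. N increased by a factor of about 8. Suppose, for illustration only, that average
% welfare doubled over the same period. Then
\[ \frac{W_{\text{today}}}{W_{1800}} \approx 8 \times 2 = 16 , \]
% and most of that factor comes from N, not from average welfare. So how much of an
% improvement this represents depends heavily on one's population ethics
% (e.g. total vs. average views).
```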
Another key uncertainty is the implications of some of the discussed historic trends for the value of the world going forward, about which I think we’re largely clueless. For example, what are the effects of changing inequality on the long-term future?
 It’s not clear to me if “new optimism” is actually new. I’m using Hickel’s label just because it’s short and it’s being used in this debate anyway, not to endorse Hickel’s views or make any other claim.
 There is an obvious problem with new optimism, which is that it’s anthropocentric. In fact, on many plausible views, the total axiological value of the world at any time in the recent past may be dominated by the aggregate wellbeing of nonhuman animals; even more counterintuitively, it may well be dominated by things like the change in the total population size of invertebrates. But this debate is about human wellbeing, so I’ll ignore this problem.
I agree that the world has gotten much better than it was.
There are two important reasons for this; the other improvements that we see mostly follow from them.
Energy consumption (is wealth)
Energy consumption per person has increased over the last 500 years, and that increased consumption translates into welfare.
The amount of knowledge that we as humanity possess has increased dramatically, and that knowledge is widely accessible. 75% of kids finish 9th grade, 12.5% finish 6th grade, 4.65% finish less than 6th grade, and unfortunately around 7-8% of kids have never gone to school. Increases in education translate into increases in health and wealth (actually energy consumption), more so in countries with market economies than in non-market economies.
The various -isms (capitalism, socialism, communism, neoliberalism, colonialism, fascism) have very little to do with human development, and in fact have been very negative for human development. (I am skipping theory about how the -isms are supposed to work, and jumping to the actual effects).
“Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism.”
Pinker has his critics, a sample at
The improvements in knowledge are secondary to the tapping of fossil fuels and the resulting energy consumption, which eventually caused the demographic transition.
Once the demographic transition happened, there were no longer young men willing to fight foreign wars, and violence declined. I.e., outright occupation (colonialism) gave way to neocolonialism, and that is the world we find ourselves in today.
I find it hard to take any claims of “reason” and “humanism” seriously while the world warms and per capita consumption of fossil fuels is 10 times higher in the USA than in “developing” countries. Countries of the global south still have easily solvable problems, like basic education and health, that are underfunded.
Richard A. Easterlin has a good understanding when he asks “Why Isn’t the Whole World Developed?” https://www.jstor.org/stable/2120886?seq=1
When downvoting please explain why
I just now saw this post, but I would guess that some readers wanted more justification for the use of the term “secondary”, which implies that you’re assigning value to both of (improvements in knowledge) and (tapping of fossil fuels) and saying that the negative value of the latter outweighs the value of the former. I’d guess that readers were curious how you weighed these things against each other.
I’ll also note that Pinker makes no claim that the world is perfect or has no problems, and that claiming that “reason” or “humanism” has made the world better does not entail that they’ve solved all the world’s problems or even that the world is improving in all important ways. You seem to be making different claims than Pinker does about the meaning of those terms, but you don’t explain how you define them differently. (I could be wrong about this, of course; that’s just what I picked up from a quick reading of the comment.)
Thanks Aaron for your response.
I am assigning positive value to both improvements in knowledge and increased energy use (via tapping of fossil fuel energy). I am not weighing them one against the other. I am saying that without the increased energy from fossil fuels we would still be agricultural societies, with repeated rises and falls of empires. The Indus Valley civilization, the ancient Greeks, and the Mayans all repeatedly crashed. At the peak of those civilizations I am sure art, culture, and knowledge flourished. Eventually humans outran their resources and crashed; the crash simplified art forms and culture, and knowledge was also lost.
The driver is energy, and the result is increased art, culture, knowledge and peace too.
Reason and humanism have very little to do with why our world is peaceful today (in the sense that outright murder, slavery, colonialism are no longer accepted).
I read the book by Pinker, and his emphasis on Western thought and the Enlightenment was off-putting. We are all human; there are no Western values or Eastern values.
Hans Rosling puts it beautifully:
“There is no such thing as Swedish values. Those are modern values”
[On https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ ]
[ETA: After having talked to more people, it now seems to me that disagreeing on this point more often explains different reactions than I thought it would. I’m also now less confident that my impression that there wasn’t bad faith from the start is correct, though I think I still somewhat disagree with many EAs on this. In particular, I’ve also seen plenty of non-EA people who don’t plausibly have a “protect my family” reaction say the piece felt like a failed attempt to justify a negative bottom line that was determined in advance.] (Most of the following doesn’t apply in cases where someone is acting in bad faith and is determined to screw you over. And in fact I’ve seen the opposing failure mode of people assuming good faith for too long. But I don’t think this is a case of bad faith.)
I’ve seen some EAs react pretty negatively or angrily to that piece. (Tbc, I’ve also seen different reactions.) Some have described the article as a “hit piece”.
I don’t think it qualifies as a hit piece. More like a piece that’s independent/pseudo-neutral/ambiguous and tries to stick to dry facts/observations, but in some places provides a distorted picture by failing to be charitable / arguably missing the point / being one-sided and selective in the observations it reports.
I still think that reporting like this is net good, and that the world would be better if there was more of it at the margin, even if it had flaws as severe as this piece’s. (Tbc, I think there would have been a plausibly realistic/achievable version of that article that would have been better, and that there is fair criticism one can direct at it.)
To put it bluntly, I don’t believe that having even maximally well-intentioned and intelligent people at key institutions is sufficient for achieving a good outcome for the world. I find it extremely hard to have faith in a setup that doesn’t involve a legible system/structure with things like division of labor, checks and balances, procedural guarantees, healthy competition, and independent scrutiny of key actors. I don’t know if the ideal system for providing such outside scrutiny will look even remotely like today’s press, but currently it’s one of the few things in this vein that we have for nonprofits, and Karen Hao’s article is an (albeit flawed) example of it.
Whether this specific article was net good or not seems pretty debatable. I definitely see reasons to think it’ll have bad consequences, e.g. it might crowd out better reporting, might provide bad incentives by punishing orgs for trying to do good things, … I’m less wedded to a prediction of this specific article’s impact than to the broader frame for interpreting and reacting to it.
I find something about the very negative reactions I’ve seen worrying. I of course cannot know what they were motivated by, but some seemed like the way I would expect someone to react who feels personally hurt because they judge a situation as being misunderstood, feels like they need to defend themselves, or feels like they need to rally to protect their family. I can relate to misunderstandings being a painful experience, and have sympathy for it. But I also think that if you’re OpenAI, or “the EA community”, or anyone aiming to change the world, then misunderstandings are part of the game, and that any misunderstanding involves at least two sides. The reactions I’d like to see would try to understand what has happened and engage constructively with how to productively manage the many communication and other challenges involved in trying to do something that’s good for everyone without being able to fully explain your plans to most people. (An operationalization: If you think this article was bad, I think that ideally the hypothesis “it would be good if we had better reporting” would enter your mind as readily as the hypothesis “it would be good if OpenAI’s comms team and leadership had done a better job”.)
[Is longtermism bottlenecked by “great people”?]
Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don’t have anyone in the EA community who can do X, (ii) the bottleneck for this isn’t credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more “great people”.
I find this extremely dubious. (In fact, it seems so crazy to me that it seems more likely than not that I significantly misunderstood the person who I think made these claims.) The first claim is of course vacuously true if, for X, we choose some ~impossible task such as “experience a utility-monster amount of pleasure” or “come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it”. But of course more great people don’t help with solving impossible tasks.
Given the size and talent distribution of the EA community my guess is that for most apparent X, the issue either is that (a) X is ~impossible, or (b) there are people in EA who could do X, but the relevant actors cannot identify them, or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented “great people”, and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or giving them the resources to allow them to get training elsewhere) or make a mistake by not doing so.
My best guess for the genesis of the “we need more great people” perspective: Suppose I talk a lot to people at an organization that thinks there’s a decent chance we’ll develop transformative AI soon but it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduce that risk. If these activities are scalable projects with short feedback loops on some intermediate metrics (e.g. running some super-large-scale machine learning experiments), then I expect I would hear a lot of claims like “we really need someone who can do X”. I think it’s just a general property of a certain kind of fast-growing organization that’s doing practical things in the world that everything constantly seems like it’s on fire. But I would also expect that, if I poked a bit at these claims, it would usually turn out that X is something like “contribute to this software project at the pace and quality level of our best engineers, w/o requiring any management time” or “convince some investors to give us much more money, but w/o anyone spending any time transferring relevant knowledge”. If you see that things break because X isn’t done, even though something like X seems doable in principle (perhaps you see others do it), it’s tempting to think that what you need is more “great people” who can do X. After all, people generally are the sort of stuff that does things, and maybe you’ve actually seen some people do X. But it still doesn’t follow that in your situation “great people” are the bottleneck …
Curious if anyone has examples of tasks X for which the original claims seem in fact true. That’s probably the easiest way to convince me that I’m wrong.
I’m not quite sure how high your bar is for “experience”, but many of the tasks that I’m most enthusiastic about in EA are ones which could plausibly be done by someone in their early 20s who eg just graduated university. Various tasks of this type:
Work at MIRI on various programming tasks which require being really smart and good at math and programming and able to work with type theory and Haskell. Eg we recently hired Seraphina Nix to do this right out of college. There are other people who are recent college graduates who we offered this job to who didn’t accept. These people are unusually good programmers for their age, but they’re not unique. I’m more enthusiastic about hiring older and more experienced people, but that’s not a hard requirement. We could probably hire several more of these people before we became bottlenecked on management capacity.
Generalist AI safety research that Evan Hubinger does—he led the writing of “Risks from Learned Optimization” during a summer internship at MIRI; before that internship he hadn’t had much contact with the AI safety community in person (though he’d read stuff online).
Richard Ngo is another young AI safety researcher doing lots of great self-directed stuff; I don’t think he consumed an enormous amount of outside resources while becoming good at thinking about this stuff.
I think that there are inexperienced people who could do really helpful work with me on EA movement building; to be good at this you need to have read a lot about EA and be friendly and know how to talk to lots of people.
My guess is that EA does not have a lot of unidentified people who are as good at these things as the people I’ve identified.
I think that the “EA doesn’t have enough great people” problem feels more important to me than the “EA has trouble using the people we have” problem.
Thanks, very interesting!
I agree the examples you gave could be done by a recent graduate. (Though my guess is the community building stuff would benefit from some kinds of additional experience that have trained relevant project management and people skills.)
I suspect our impressions differ in two ways:
1. My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)
2. Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you’ve identified. E.g., I can think of ~5 people off the top of my head of whom I think they might be great at one of the things you listed, and if I had your view on their value I’d probably think they should stop doing what they’re doing now and switch to trying one of these things. And I suspect if I thought hard about it, I could come up with 5-10 more people—and then there is the large number of people neither of us has any information about.
Two other thoughts I had in response:
It might be quite relevant if “great people” refers only to talent or also to beliefs and values/preferences. E.g. my guess is that there are several people who could be great at functional programming who either don’t want to work for MIRI, or don’t believe that this would be valuable. (This includes e.g. myself.) If, to count as a “great person”, you need to have the right beliefs and preferences, I think your claim that “EA needs more great people” becomes stronger. But I think the practical implications would differ from the “greatness is only about talent” version, which is the one I had in mind in the OP.
One way to make the question more precise: At the margin, is it more valuable (a) to try to add high-potential people to the pool of EAs, or (b) to change the environment (e.g. coordination, incentives, …) to increase the expected value of activities by people in the current pool? With this operationalization, I might actually agree that the highest-value activities of type (a) are better than the ones of type (b), at least if the goal is finding programmers for MIRI and maybe for community building. (I’d still think that this would be because, while there are sufficiently talented people in EA, they don’t want to do this, and it’s hard to change beliefs/preferences and easier to get new smart people excited about EA – not because the community literally doesn’t have anyone with a sufficient level of innate talent. Of course, this probably wasn’t the claim the person I originally talked to was making.)
My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)
I don’t think that peculiarities of what kinds of EA work we’re most enthusiastic about lead to much of the disagreement. When I imagine myself taking on various different people’s views about what work would be most helpful, most of the time I end up thinking that valuable contributions could be made to that work by sufficiently talented undergrads.
Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you’ve identified. E.g., I can think of ~5 people off the top of my head of whom I think they might be great at one of the things you listed, and if I had your view on their value I’d probably think they should stop doing what they’re doing now and switch to trying one of these things. And I suspect if I thought hard about it, I could come up with 5-10 more people—and then there is the large number of people neither of us has any information about.
I am pretty skeptical of this. Eg I suspect that people like Evan (sorry Evan if you’re reading this for using you as a running example) are extremely unlikely to remain unidentified, because one of the things that they do is think about things in their own time and put the results online. Could you name a profile of such a person, and which of the types of work I named you think they’d maybe be as good at as the people I named?
It might be quite relevant if “great people” refers only to talent or also to beliefs and values/preferences
I am not intending to include beliefs and preferences in my definition of “great person”, except for preferences/beliefs like being not very altruistic, which I do count.
E.g. my guess is that there are several people who could be great at functional programming who either don’t want to work for MIRI, or don’t believe that this would be valuable. (This includes e.g. myself.)
I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it? (To be clear I have no idea how good you’d be at programming for MIRI because I barely know you, and so I’m just talking about priors rather than specific guesses about you.)
For what it’s worth, I think that you’re not credulous enough of the possibility that the person you talked to actually disagreed with you—I think you might be doing that thing whose name I forget where you steelman someone into saying the thing you think instead of the thing they think.
I agree we have important disagreements other than what kinds of EA work we’re most enthusiastic about. While not of major relevance for the original issue, I’d still note that I’m surprised by what you say about various other people’s views on EA, and I suspect it might not be true for me: while I agree there are some highly-valuable tasks that could be done by recent undergrads, I’d guess that if I made a list of the most valuable possible contributions then a majority of the entries would require someone to have a lot of AI-weighted generic influence/power (e.g. the kind of influence over AI a senior government member responsible for tech policy has, or a senior manager in a lab that could plausibly develop AGI), and that because of the way relevant existing institutions are structured this would usually require a significant amount of seniority. (It’s possible for some smart undergrads to embark on a path culminating in such a position, but my guess is this is not the kind of thing you had in mind.)
I am pretty skeptical of this. Eg I suspect that people like Evan (sorry Evan if you’re reading this for using you as a running example) are extremely unlikely to remain unidentified, because one of the things that they do is think about things in their own time and put the results online. [...]
I don’t think these two claims are plausibly consistent, at least if “people like Evan” is also meant to exclude beliefs and preferences: For instance, if someone with Evan-level abilities doesn’t believe that thinking in their own time and putting results online is a worthwhile thing to do, then the identification mechanism you appeal to will fail. More broadly, someone’s actions will generally depend on all kinds of beliefs and preferences (e.g. on what they are able to do, on what people around them expect, on other incentives, …) that are much more dependent on the environment than relatively “innate” traits like fluid intelligence. The boundary between beliefs/preferences and abilities is fuzzy, but as I suggested at the end of my previous comment, I think for the purpose of this discussion it’s most useful to distinguish changes in value we can achieve (a) by changing the “environment” of existing people vs. (b) by adding more people to the pool.
Could you name a profile of such a person, and which of the types of work I named you think they’d maybe be as good at as the people I named?
What do you mean by “profile”? Saying what properties they have, but without identifying them? Or naming names or at least usernames? If the latter, I’d want to ask the people if they’re OK with me naming them publicly. But in principle happy to do either of these things, as I agree it’s a good way to check if my claim is plausible.
I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it?
Maybe. When I said “they might be great”, I meant something roughly like: if it was my main goal to find people great at task X, I’d want to invest at least 1-10 hours per person finding out more about how good they’d be at X (this might mean talking to them, giving them some sort of trial tasks etc.) I’d guess that for between 5 and 50% of these people I’d eventually end up concluding they should work full-time doing X or similar.
Also note that originally I meant to exclude practice/experience from the relevant notion of “greatness” (i.e. it just includes talent/potential). So for some of these people my view might be something like “if they did 2 years of deliberate practice, they then would have a 5% to 50% chance of meeting the bar for X”. But I now think that probably the “marginal value from changing the environment vs. marginal value from adding more people” operationalization is more useful, which would require “greatness” to include practice/experience to be consistent with it.
If we disagree about the bar, I suspect that me having bad models about some of the examples you gave explains more of the disagreement than me generally dismissing high bars. “Functional programming” just doesn’t sound like the kind of task to me with high returns to super-high ability levels, and similarly for community building; but it’s plausible that there are bundles of tasks involving these things where it matters a lot if you have someone whose ability is 6 instead of 5 standard deviations above the mean (not always well-defined, but you get the idea). E.g. if your “task” is “make a painting that will be held in similar regard as the Mona Lisa” or “prove P != NP” or “be as prolific as Ramanujan at finding weird infinite series for pi”, then, sure, I agree we need an extremely high bar.
Thanks for pointing this out. FWIW, I think there likely is both substantial disagreement between me and that person and that I misunderstood their view in some ways.
[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]
1. In scenarios where OpenAI, DeepMind etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) ‘structure’ and (b) ‘content’. By (a) I roughly mean what the relevant decision-making mechanisms look like irrespective of the specific goals and resources of the actors the mechanism consists of; e.g., whether some key AI lab is a nonprofit or a publicly traded company; who would decide by what rules/voting scheme how windfall profits would be redistributed; etc. By (b) I mean something like how much the CEO of a key firm, or their advisors, care about the long-term future. -- I can see why relying mostly on (b) is attractive, e.g. it’s arguably more tractable; however, some EA thinking (mostly from the Bay Area / the rationalist community to be honest) strikes me as focusing on (b) for reasons that seem ahistoric or otherwise dubious to me. So I don’t feel convinced that what I perceive to be a very stark focus on (b) is warranted. I think that figuring out if there are viable strategies that rely more on (a) is better done from within institutions that have no ties with key TAI actors, and also might be best done by people who don’t quite match the profile of the typical new EA who got excited about Superintelligence or HPMOR. Overall, I think that making more academic research in broadly “policy relevant” fields happen would be a decent strategy if one ultimately wanted to increase the amount of thinking on type-(a) theories of impact.
2. What’s the theory of impact if TAI happens in more than 20 years? More than 50 years? I think it’s not obvious whether it’s worth spending any current resources on influencing such scenarios (I think they are more likely, but we have much less leverage). However, if we wanted to do this, then I think it’s worth bearing in mind that academia is one of the few institutions (in a broad sense) that has a strong track record of enabling cumulative intellectual progress over long time scales. I roughly think that, in a modal scenario, no one in 50 years is going to remember anything that was discussed on the EA Forum or LessWrong, or within the OpenAI policy team, today (except people currently involved); but if AI/TAI was still (or again) a hot topic then, I think it’s likely that academic scholars will read academic papers by Dafoe, his students, the students of his students etc. Similarly, based on track records I think that the norms and structure of academia are much better equipped than EA to enable intellectual progress that is more incremental and distributed (as opposed to progress that happens by way of ‘at least one crisp insight per step’; e.g. the Astronomical Waste argument would count as one crisp insight); so if we needed such progress, it might make sense to seed broadly useful academic research now.
My view is closer to “~all that matters will be in the specifics, and most of the intuitions and methods for dealing with the specifics are either sort of hard-wired or more generic/have different origins than having thought about race models specifically”. A crux here might be that I expect most of the tasks involved in dealing with the policy issues that would come up if we got TAI within the next 10-20 years to be sufficiently similar to garden-variety tasks involved in familiar policy areas that, as a first pass: (i) if theoretical academic research was useful, we’d see more stories of the kind “CEO X / politician Y’s success was due to idea Z developed through theoretical academic research”, and (ii) prior policy/applied strategy experience is the background most useful for TAI policy, with usefulness increasing with the overlap in content and relevant actors; e.g.: working with the OpenAI policy team on pre-TAI issues > working within Facebook on a strategy for how to prevent the government from splitting up the firm in case a left-wing Democrat wins > business strategy for a tobacco company in the US > business strategy for a company outside of the US that faces little government regulation > academic game theory modeling. That’s probably too pessimistic about the academic path, and of course it’ll depend a lot on the specifics (you could start in academia and then get into Facebook etc.), but you get the idea.
Overall, the only somewhat open question for me is whether ideally we’d have (A) ~only people working quite directly with key actors or (B) a mix of people working with key actors and more independent ones e.g. in academia. It seems quite clear to me that the optimal allocation will contain a significant share of people working with key actors [...]
If there is a disagreement, I’d guess it’s located in the following two points:
(1a) How big are countervailing downsides from working directly with, or at institutions having close ties with, key actors? Here I’m mostly concerned about incentives distorting the content of research and strategic advice. I think the question is broadly similar to: If you’re concerned about the impacts of British rule on India in the 1800s, is it best to work within the colonial administration? If you want to figure out how to govern externalities from burning fossil fuels, is it best to work in the fossil fuel industry? I think the cliche left-wing answer to these questions is too confident in “no” and is overlooking important upsides, but I’m concerned that some standard EA answers in the AI case are too confident in “yes” and are overlooking risks. Note that I’m most concerned about kind of “benign” or “epistemic” failure modes: I think it’s reasonably easy to tell people with broadly good intentions apart from sadists or even personal-wealth maximizers (at least in principle—whether this will get implemented is another question); I think it’s much harder to spot cases like key people incorrectly believing that it’s best if they keep as much control for themselves/their company as possible because after all they are the ones with both good intentions and an epistemic advantage (note that all of this really applies to a colonial administration with little modification, though here in cases such as the “Congo Free State” even the track record of “telling personal-wealth maximizers apart from people with humanitarian intentions” maybe isn’t great—also NB I’m not saying that this argument would necessarily be unsound; i.e. I think that in some situations these people would be correct).
(1b) To what extent do we need (a) novel insights as opposed to (b) an application of known insights or common-sense principles? E.g., I’ve heard claims that the sale of telecommunication licenses by governments is an example where post-1950 research-level economics work in auction theory has had considerable real-world impact, and AFAICT this kind of auction theory strikes me as reasonably abstract and in little need of having worked with either governments or telecommunication firms. Supposing this is true (I haven’t really looked into this), how many opportunities of this kind are there in AI governance? I think the case for (A) is much stronger if we need little to no (a), as I think the upsides from trust networks etc. are mostly (though not exclusively) useful for (b). FWIW, my private view actually is that we probably need very little of (a), but I also feel like I have a poor grasp of this, and I think it will ultimately come down to what high-level heuristics to use in such a situation.
I found this really fascinating to read. Is there any chance that you might turn it into a “coherent text” at some point?
I especially liked the question on possible downsides of working with key actors; orgs in a position to do this are often accused of collaborating in the perpetuation of bad systems (or something like that), but rarely with much evidence to back up those claims. I think your take on the issue would be enlightening.
Thanks for sharing your reaction! There is some chance that I’ll write up these and maybe other thoughts on AI strategy/governance over the coming months, but it depends a lot on my other commitments. My current guess is that it’s maybe only 15% likely that I’ll think this is the best use of my time within the next 6 months.
[Epistemic status: speculation based on priors about international organizations. I know next to nothing about the WHO specifically.]
[On the WHO declaring COVID-19 a pandemic only (?) on March 12th. Prompted by this Facebook discussion on epistemic modesty on COVID-19.]
- [ETA: this point is likely wrong, cf. Khorton’s comment below. However, I believe the conclusion that the timing of WHO declarations by itself doesn’t provide a significant argument against epistemic modesty still stands, as I explain in a follow-up comment below.] The WHO declaring a pandemic has a bunch of major legal and institutional consequences. E.g. my guess is that among other things it affects the amounts of resources the WHO and other actors can utilize, the kind of work the WHO and others are allowed to do, and the kind of recommendations the WHO can make.
- The optimal time for the WHO to declare a pandemic is primarily determined by these legal and institutional consequences. Whether COVID-19 is or will in fact be a pandemic in the everyday or epidemiological sense is an important input into the decision, but not a decisive one.
- Without familiarity with the WHO and the legal and institutional system it is a part of, it is very difficult to accurately assess the consequences of the WHO declaring a pandemic. Therefore, it is very hard to evaluate the timing of the WHO’s declaration without such familiarity. And being even maximally well-informed about COVID-19 itself isn’t even remotely sufficient for an accurate evaluation.
- The bottom line is that the WHO officially declaring that COVID-19 is a pandemic is a totally different thing from any individual persuasively arguing that COVID-19 is or will be a pandemic. In a language that would accurately reflect differences in meaning, me saying that COVID-19 is a pandemic and the WHO declaring COVID-19 is a pandemic would be done using different words. It is simply not the primary purpose of this WHO speech act to be an early, accurate, reliable, or whatever indicator of whether “COVID-19 is a pandemic”, to predict its impact, or any other similar thing. It isn’t primarily epistemic in any sense.
- If just based on information about COVID-19 itself someone confidently thinks that the WHO ought to have declared a pandemic earlier, they are making a mistake akin to the mistake reflected by answering “yes” to the question “could you pass me the salt?” without doing anything.
So did the WHO make a mistake by not declaring COVID-19 to be a pandemic earlier, and if so how consequential was it? Well, I think the timing was probably suboptimal just because my prior is that most complex institutions aren’t optimized for getting the timing of such things exactly right. But I have no idea how consequential a potential mistake was. In fact, I’m about 50-50 on whether the optimal time would have been slightly earlier or slightly later. (Though substantially earlier seems significantly more likely optimal than substantially later.)
“The WHO declaring a pandemic has a bunch of major legal and institutional consequences. E.g. my guess is that among other things it affects the amounts of resources the WHO and other actors can utilize, the kind of work the WHO and others are allowed to do, and the kind of recommendations the WHO can make.”
Are you sure about this? I’ve read that there aren’t major implications to it being officially declared a pandemic.
This article suggests there aren’t major changes based on ‘pandemic’ status https://www.bbc.co.uk/news/world-51839944
[Epistemic status: info from the WHO website and Wikipedia, but I overall invested only ~10 min, so might be missing something.]
It seems my remarks do apply to “public health emergency of international concern (PHEIC)” instead of to “pandemic”. For example, from Wikipedia:
Under the 2005 International Health Regulations (IHR), states have a legal duty to respond promptly to a PHEIC.
[Note by me: The International Health Regulations include multiple instances of “public health emergency of international concern”. By contrast, they include only one instance of “pandemic”, and this is in the term “pandemic influenza” in a formal statement by China rather than the main text of the regulation.]
The WHO declared a PHEIC due to COVID-19 on January 30th.
The OP was prompted by a claim that the timing of the WHO using the term “pandemic” provides an argument against epistemic modesty. (Though I appreciate this was less clear in the OP than it could have been, and maybe it was a bad idea to copy my Facebook comment here anyway.) From the Facebook comment I was responding to:
For example, to me, the WHO taking until ~March 12 to call this a pandemic*, when the informed amateurs I listen to were all pretty convinced that this will be pretty bad since at least early March, is at least some evidence that trusting informed amateurs has some value over entirely trusting people usually perceived as experts.
Since the WHO declaring a PHEIC seems much more consequential than them using the term “pandemic”, the timing of the PHEIC declaration seems more relevant for assessing the merits of the WHO response, and thus for any argument regarding epistemic modesty.
Since the PHEIC declaration happened significantly earlier, any argument based on the premise that it happened too late is significantly weaker. And whatever the apparent initial force of this weaker argument, my undermining response from the OP still applies.
So overall, while the OP’s premise appealing to major legal/institutional consequences of the WHO using the term “pandemic” seems false, I’m now even more convinced of the key claim I wanted to argue for: that the WHO response does not provide an argument against epistemic modesty in general, nor for the epistemic superiority of “informed amateurs” over experts on COVID-19.
About declaring it a “pandemic,” I’ve seen the WHO reason as follows (me paraphrasing):
«Once we call it a pandemic, some countries might throw up their hands and say “we’re screwed,” so we should better wait before calling it that, and instead emphasize that countries need to try harder at containment for as long as there’s still a small chance that it might work.»
Yeah, I think that’s a good point.
I’m not sure I can have updates in favor or against modest epistemology because it seems to me that my true rejection is mostly “my brain can’t do that.” But if I could have further updates against modest epistemology, the main Covid-19-related example for me would be how long it took some countries to realize that flattening the curve instead of squishing it is going to lead to a lot more deaths and tragedy than people seem to have initially thought. I realize that it’s hard to distinguish between what’s actual government opinion versus what’s bad journalism, but I’m pretty confident there was a time when informed amateurs could see that experts were operating under some probably false or at least dubious assumptions. (I’m happy to elaborate if anyone’s interested.)
Also, predicting that something will be pretty bad or will be a pandemic is not the same as saying it is now a pandemic. When did it become a pandemic according to the WHO’s definition?
Expanding a quote I found on the wiki page, using the transcript here from 2009:
Dr Fukuda: An easy way to think about pandemic – and actually a way I have some times described in the past – is to say: a pandemic is a global outbreak. Then you might ask yourself: “What is a global outbreak”? Global outbreak means that we see both spread of the agent – and in this case we see this new A(H1N1) virus to most parts of the world – and then we see disease activities in addition to the spread of the virus. Right now, it would be fair to say that we have an evolving situation in which a new influenza virus is clearly spreading, but it has not reached all parts of the world and it has not established community activity in all parts of the world. It is quite possible that it will continue to spread and it will establish itself in many other countries and multiple regions, at which time it will be fair to call it a pandemic at that point. But right now, we are really in the early part of the evolution of the spread of this virus and we will see where it goes.
But see also WHO says it no longer uses ‘pandemic’ category, but virus still emergency from February 24, 2020.
Thank you for pointing this out! It sounds like my guess was probably just wrong.
My guess was based on a crude prior on international organizations, not anything I know about the WHO specifically. I clarified the epistemic status in the OP.