How is it that after this being on top of the EA agenda for the better part of the last decade we still have only 300 people working on this?
Yeah, it’s a good question! Some thoughts:
I’m being quite strict with my definitions. I’m only counting people working directly on AI safety. So, for example, I wouldn’t count the time I spent writing this profile on AI (or anyone else who works at 80k for that matter). (Note: I do think lots of relevant work is done by people who don’t directly work on it) I’m also not counting people who think of themselves as on an AI safety career path and are, at the moment, skilling up rather than working directly on the problem. There are some ambiguities, e.g. are the ops team of an AI org working on safety? In general though these ambiguities seem much lower than the error in the data itself.
AI safety is hugely neglected outside EA (which is a key reason why it seems so useful to work on). This isn’t a big surprise, and may be in large part a result of the fact that it used to be even more neglected, which means that almost anything started as an AI safety org is likely to have been started by EAs, and so is also seen as an EA org. That makes AI safety look like a subset of EA rather than the other way round.
Also, I’m looking at AI existential safety rather than broader AI ethics or AI safety issues. The focus on x-risk, combined with reasons to think that lots of work on non-existential AI safety isn’t that relevant (as compared with, e.g., bio, where lots of policy work is relevant both to major pandemics and to existential pandemics), makes it even more likely that this is just looking at a strict subset of EAs.
There are, I think, up to around 10 thousand engaged EAs; of those, maybe 1–2 thousand are longtermism- or x-risk-focused. So we’re looking at around 10% of these people working full-time on AI x-risk! That seems like a pretty high proportion to me, given the various causes in the wider EA (not even just longtermist) community.
So in many ways the question of “how are so few people working on AI safety after 10 years” is similar to “how are there so few EAs after 10 years”, which is a pretty complicated question. But it seems to me like EA is way way way bigger and more influential than I would ever have expected in 2012!
There are also some other bottlenecks (notably mentoring capacity). The field was nearly non-existent 10 years ago, with very few senior people to help others enter the field – and it’s (rightly) a very technical field, focused on theoretical and practical computer science / ML. Even now, the proportion of time those 300 people should be spending mentoring is very much unclear to me.
I’d also like to highlight the footnote alongside this number: “There’s a lot of subjective judgement in the estimate (e.g. “does it seem like this research agenda is about AI safety in particular?”), and it could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area. My 90% confidence interval would range from around 100 people to around 1,500 people.”
Commenting as I’d also like to see a response to this. I guess it depends how they define ‘working directly’, perhaps emphasising certain orgs? I’m not focussed on AI myself, but having spoken to loads of EAs with an AI focus, this number seems surprisingly low to me (even if nobody is doing this outside of EA). Not to say it isn’t neglected!
Amazing post. Really good clear write-up for the lay reader new to AI. I feel confident to share this.
One point where I worry that readers could take away the wrong impression is with the line that “we’re not yet at the point of knowing what policies would be useful to implement”.
I agree with you that “we are in the early stages of figuring out the shape of this problem [AI governance] and the most effective ways to tackle it”, but I worry that saying we don’t yet know what policies to advocate for (a fairly common trope among non-policy AI people in EA) gives a number of misleading impressions: that AI policy advocacy work has no value at present, that people working on AI policy don’t know what they are doing, and that they shouldn’t currently be working in this area. I think this is wrong. Governments are putting AI policies in place now; if we refuse to engage, we risk missing opportunities to make things better, and there are clear cases where we know what better and worse policy look like.
Let’s take one example directly from your own post. Your article says: “If we could successfully ‘sandbox’ an advanced AI — that is, contain it to a training environment with no access to the real world until we were very confident it wouldn’t do harm — that would help our efforts to mitigate AI risks tremendously.” That is a policy! Right now the US government is producing non-binding guidance for AI companies on how to manage the risks from AI. I am involved in some ongoing work to encourage this guidance to say that AI systems that (a) can self-improve and (b) present risks if they go wrong should be sandboxed and tested. I don’t at all think it is your intention to imply that EA should miss a policy opportunity to get AI companies to consider sandboxing (a thing you strongly agree with). But I worry that some non-policy people I talk to in the EA community seem to hold views that approximate this level of dismissal for all current AI policy advocacy work (e.g. see views of funders here and here).
Note on other AI policies. I suggest a few things to focus on at point 3 here. There is the x-risk database of 250+ policy proposals here. There is work on policy ideas in Future Proof here. Etc.
Thanks a lot for this profile!
It leaves me with a question: what is the possibility that the work outlined in the article makes things worse rather than better? These concerns are fleshed out in more detail in this question and its comment threads, but the TL;DR is:
AI safety work is difficult: there are lots of hypotheses, experiments are hard to design, we can’t do RCTs to measure whether it works, etc. Thus, there is uncertainty even about the sign of the impact.
AI safety work could plausibly speed up AI development, create information hazards, be used for greenwashing regular AI companies… thereby increasing rather than decreasing AI risk.
I’d love to see a discussion of this concern, for example in the form of an entry under “Arguments against working on AI risk to which we think there are strong responses”, or some content about how to make sure that the work is actually beneficial.
Final note: I hope this didn’t sound too adversarial. My question is not meant as a critique of the article, but rather a genuine question that makes me hesitant to switch to AI safety work.
(Responding on Benjamin’s behalf, as he’s away right now):
Agree that it’s hard to know what works in AI safety + it’s easy to do things that make things worse rather than better. My personal view is that we should expect the field of AI safety to be overall good because people trying to optimise for a thing will overall move things in its direction in expectation even if they sometimes move away from it by mistake. It seems unlikely that the best thing to do is nothing, given that AI capabilities are racing forward regardless.
I do think that the difficulty of telling what will work is a strike against pursuing a career in this area, because it makes the problem less tractable, but it doesn’t seem decisive to me.
Agree that a section on this could be good!
I appreciate the response, and I think I agree with your personal view, at least partially. “AI capabilities are racing forward regardless” is a strong argument, and it would mean that AI safety’s contribution to AI progress would be small, in relative terms.
That said, it seems that the AI safety field might be particularly prone to work that’s risky or neutral, for example:
Interpretability research: interpretability is a quasi-requirement for deploying powerful models. Research in this direction is likely to produce tools that increase confidence in AI models and lead to more of them being deployed, earlier.
Robustness research: Similar to interpretability, robustness is a very useful property of all AI models. It makes them more applicable and will likely increase use of AI.
AI forecasting: Probably neutral, maybe negative since it creates buzz about AI and increases investments.
It’s puzzling that there is much concern about AI risk, and yet little awareness of the dual-use nature of all AI research. I would appreciate a stronger discussion about how we can make AI actually more safe, as opposed to more interpretable, more robust, etc.
I think these are all great points! We should definitely worry about negative effects of work intended to do good.
That said, here are two other places where maybe we have differing intuitions:
You seem much more confident than I am that work on AI that is unrelated to AI safety is in fact negative in sign.
It seems hard to conclude that the counterfactual where any one or more of “no work on AI safety / no interpretability work / no robustness work / no forecasting work” were true is in fact a world with less x-risk from AI overall. That is, while I can see there are potential negative effects of these things, when I truly try to imagine the counterfactual, the overall impact seems likely positive to me.
Of course, intuitions like these are much less concrete than actually trying to evaluate the claims, and I agree it seems extremely important for people evaluating or doing anything in AI safety to ensure they’re doing positive work overall.
Thanks for pointing out these two places!
Work on AI drives AI risk. This is not equally true of all AI work, but the overall correlation is clear. There are good arguments that AI will not be aligned by default, and that current methods can produce bad outcomes if naively scaled up. These are cited in your problem profile. With that in mind, I would not say that I’m confident that AI work is net-negative… but the risk of negative outcomes is too large to feel comfortable.
A world with more interpretability / robustness work is a world where powerful AI arrives faster (maybe good, maybe bad, certainly risky). I am echoing section 2 of the problem profile, which argues that the sheer speed of AI advances is cause for concern. Moreover, because interpretability and robustness work advances AI, traditional AI companies are likely to pursue such work even without an 80,000 Hours problem profile. This could be an opportunity for 80,000 Hours to direct people to work that is even more central to safety.
As you say, these are currently just intuitions, not concretely evaluated claims. It’s completely OK if you don’t put much weight on them. Nevertheless, I think these are real concerns shared by others (e.g. Alexander Berger, Michael Nielsen, Kerry Vaughan), and I would appreciate a brief discussion, FAQ entry, or similar in the problem profile.
And now I’ll stop bothering you :) Thanks for having written the problem profile. It’s really nice work overall.
Heads up that some of your links (e.g. those in “full table of contents”) go to a page that reads: “Sorry, you are not allowed to preview drafts.”
Amazing to see this out. Really excited to read it!!! :-)
Ah thanks :) Fixed.
I’m a fan of the profile, especially the section on “What do we think are the best arguments we’re wrong?”. I thought this was well done and clearly explained.
One important category that I don’t remember seeing is wider arguments against existential risk being a priority. E.g. in my experience with 16–18 year olds in the UK, a very common response to Will MacAskill’s TED talk (which they saw in the application process) was disagreement that the future was actually on track to be positive (and hence worth saving).
More anecdotally, something that I’ve experienced in numerous conversations, with these people and others, is that they don’t expect/believe they could be motivated to work on this problem. (e.g. due to it feeling more abstract, less visceral than other plausible priorities.)
Maybe you didn’t cover these because they’re relevant to much work on x-risks, rather than AI safety specifically?
I think this is a good point. The goal is maximising the expected value of the future, not minimising the probability of the worst outcome.
Pardon me if this is an obvious reference around here, but what is the source for the “much higher than 50%” risk? My prior is that such percentages are too high to be taken seriously as a rational prediction, but precisely for that reason I’d be interested in challenging and updating.
Last I heard Nate Soares (at MIRI) has an all-things-considered probability around 80%, and Evan Hubinger recently gave ~80% too. Nate’s reasoning is here, and he would probably also endorse this list of challenges.
I think you don’t really have to have any crazy beliefs to have probabilities above 50%, just:
higher confidence in the core arguments being correct, such that you think there are concrete problems that probably need to be solved to avoid AI takeover
a prior that is not overwhelmingly low, despite some previous mechanisms for catastrophe, like overpopulation and nuclear war, having turned out to be avoidable. The world is allowed to kill you.
observation that not much progress has been made on the problem so far, and belief that this will not massively speed up as we get closer to AGI
Believing there are multiple independent core problems we don’t have traction on, or that some problems are likely to take serial time or multiple attempts that we don’t have, can drive this probability higher.
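The compounding effect of multiple independent unsolved problems can be sketched numerically. This is just an illustration of the intuition above; the per-problem success probabilities used here are made up, not estimates anyone has endorsed:

```python
# Hypothetical illustration: if avoiding AI takeover requires solving
# several independent core problems, the chance of overall failure
# compounds, even with decent odds on each problem individually.

def p_failure(p_solve_each):
    """Probability that at least one required problem goes unsolved,
    assuming the problems are solved independently."""
    p_all_solved = 1.0
    for p in p_solve_each:
        p_all_solved *= p
    return 1.0 - p_all_solved

# Being 70% confident in solving each of three independent core
# problems still leaves roughly a 66% chance of failing at least one.
print(round(p_failure([0.7, 0.7, 0.7]), 2))  # → 0.66
```

So a headline probability above 50% can fall out of individually moderate credences, without any single extreme belief. The independence assumption is doing real work here, of course; correlated problems would compound less.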
Adding Nate Soares’s “AGI ruin scenarios are likely [...]”
See e.g. Yudkowsky’s AGI Ruin: A List of Lethalities. I think at this point Yudkowsky is far from alone in giving it >50% probability, though I expect that view is far less common in academia and among machine learning (capabilities) researchers.
Thank you for this great overview! I might have missed it, but is there a link to work being done (or needing to be done) on helping people adapt and reskill in response to upcoming AI developments? Similar to the reskilling need linked to greener jobs. I can imagine that a real focus and opportunity lies here: ensuring that people who will see their current jobs or fields widely impacted by AI have the guidance and support to move towards a career that increases their sense of purpose and contribution, rather than leaving them with a sense of loss of meaning and/or exclusion.
Thank you! Alix.