Benjamin was a research analyst at 80,000 Hours. Before joining 80,000 Hours, he worked for the UK Government and did some economics and physics research.
Benjamin Hilton
Factory farming as a pressing world problem
Nuclear weapons – Problem profile
Thanks Vasco! I’m working on a longer article on exactly this question (how pressing is nuclear risk). I’m not quite sure what I’ll end up concluding yet, but your work is a really helpful input.
Nuclear weapons safety and security—Career review
Totally agree! Indeed, there’s a classic 80k article about this.
When working out your next steps, we tend to recommend working forwards from what you know, and working backwards from where you might want to end up (see our article on finding your next career steps). We also think people should explore more with their careers (see our article on career exploration). If there are areas where we’re giving the opposite message, I’d love to know – shoot me an email or DM?
80,000 Hours’ new series on building skills
Hi Remmelt,
Thanks for sharing your concerns, both with us privately and here on the forum. These are tricky issues and we expect people to disagree about how to weigh all the considerations — so it’s really good to have open conversations about them.
Ultimately, we disagree with you that it’s net harmful to do technical safety research at AGI labs. In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board.
We argue for this position extensively in my article on the topic (and we only list roles consistent with the considerations in that article).
Some other things we’ve published on this topic in the last year or so:
We recently released a podcast episode with Nathan Labenz on some of the controversy around OpenAI, including his concerns about some of their past safety practices, whether ChatGPT’s release was good or bad, and why its mission of developing AGI may be too risky.
Benjamin
Most of our advice on actually having an impact — rather than building career capital — is highly relevant to mid-career professionals. That’s because they’re entering their third career stage (https://80000hours.org/career-guide/career-planning/#three-career-stages), i.e. actually trying to have an impact. When you’re mid-career, it’s much more important to appropriately:
Pick a problem
Find a cost-effective way of solving that problem that fits your skills
Avoid doing harm
So we hope mid-career people can get a lot out of reading our articles. I’d probably in particular suggest reading our advanced series (https://80000hours.org/advanced-series/).
Announcing the new 80,000 Hours Career Guide, by Benjamin Todd
By “engagement time” I mean exactly “time spent on the website”.
Thanks for this comment Tyler!
To clarify what I mean by unknown unknowns, here’s a climate-related example: We’re uncertain about the strength of various feedback loops, like how much warming could be produced by cloud feedbacks. We’d then classify “cloud feedbacks” as a known unknown. But we’re also uncertain about whether there are feedback loops we haven’t identified. Since we don’t know what these might be, these loops are unknown unknowns. As you say, the known feedback loops don’t seem likely to warm the Earth enough to cause a complete destruction of civilisation, which means that if climate change were to lead to civilisational collapse, that would probably be because of something we failed to consider.
But here’s the thing: generally we do know something about unknown unknowns.[1] In the case of these unknown feedback loops, we can place some constraints on them. For example:
They couldn’t cool the Earth past absolute zero, because that’s pretty much impossible.[2]
They almost certainly couldn’t make the Earth hotter than the Sun (at some point the Earth would have to become a fusing ball of plasma, and the Earth isn’t nearly massive enough to be hotter than the Sun even if it did turn into a star).
In fact, we can gather a broad variety of evidence about these unknown unknowns, using several different lines of evidence, including:
The physics constraining possible feedback processes
The historical climate record (since 1800)
The paleoclimate record (millions of years into the past)
Accounting for these multiple lines of evidence is exactly what the 6th Assessment Report attempts to do when calculating climate sensitivity (how much Earth’s surface will cool or warm after a specified factor causes a change in its climate system):[3]
In AR6 [the 6th Assessment report], the assessments of ECS [equilibrium climate sensitivity] and TCR [transient climate response] are made based on multiple lines of evidence, with ESMs [earth system models] representing only one of several sources of information. The constraints on these climate metrics are based on radiative forcing and climate feedbacks assessed from process understanding (Section 7.5.1), climate change and variability seen within the instrumental record (Section 7.5.2), paleoclimate evidence (Section 7.5.3), emergent constraints (Section 7.5.4), and a synthesis of all lines of evidence (Section 7.5.5). In AR5 [the 5th assessment report], these lines of evidence were not explicitly combined in the assessment of climate sensitivity, but as demonstrated by Sherwood et al. (2020) their combination narrows the uncertainty ranges of ECS compared to that assessed in AR5.
That is, as I mentioned in the main post, “the IPCC’s Sixth Assessment Report… attempts to account for structural uncertainty and unknown unknowns. Roughly, they find it’s unlikely that all the various lines of evidence are biased in just one direction — for every consideration that could increase warming, there are also considerations that could decrease it.”
As a result, even when accounting for unknown unknowns, it looks extremely unlikely that anthropogenic warming could heat the Earth enough to cause complete civilisational collapse (for a discussion of how hot that would need to be, see the first section of the main post!).
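To make the “multiple lines of evidence” point a bit more concrete, here’s a minimal Bayesian sketch. The shapes and numbers below are purely illustrative (they are not the distributions used by Sherwood et al. or AR6): the point is just that each independent line of evidence contributes a likelihood over climate sensitivity, and combining them gives a narrower range than any single line on its own.

```python
import numpy as np

# Grid of possible equilibrium climate sensitivity (ECS) values, in degrees C
ecs = np.linspace(0.0, 10.0, 1001)

def gaussian(x, mean, sd):
    """Unnormalised Gaussian likelihood."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2)

# Purely illustrative likelihoods for three independent lines of evidence
# (process understanding, instrumental record, paleoclimate) -- made-up
# shapes for the sake of the example, not real assessed distributions.
likelihoods = [
    gaussian(ecs, mean=3.0, sd=1.5),  # process understanding (feedback physics)
    gaussian(ecs, mean=2.8, sd=1.2),  # instrumental (historical) record
    gaussian(ecs, mean=3.3, sd=1.4),  # paleoclimate record
]

def credible_interval(weights, lo=0.05, hi=0.95):
    """5-95% interval of a distribution defined by weights on the ECS grid."""
    p = weights / weights.sum()
    cdf = np.cumsum(p)
    return ecs[np.searchsorted(cdf, lo)], ecs[np.searchsorted(cdf, hi)]

# Each line of evidence alone gives a fairly wide range...
for name, like in zip(["process", "instrumental", "paleo"], likelihoods):
    print(name, credible_interval(like))

# ...but multiplying the independent likelihoods (assuming a flat prior)
# gives a combined distribution noticeably narrower than any single line.
combined = np.prod(likelihoods, axis=0)
print("combined", credible_interval(combined))
```

The specific numbers don’t matter; what matters is that independent lines of evidence are unlikely to all be biased in the same direction, so combining them shrinks the tails — which is roughly the structure of the AR6 / Sherwood et al. argument.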
If you’re interested in diving into this further, I’d suggest taking a look at the original paper “An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence” by Sherwood et al., or Why low-end ‘climate sensitivity’ can now be ruled out, a popular summary by the paper’s authors.
[1] It’s of course true that there are some kinds of unknown unknowns that are impossible to account for — that is, things about which we have no information. But these are rarely particularly important unknown unknowns, in part because of that lack of information: in order to have no information about something, we necessarily can’t have any evidence for its existence, so from the perspective of Occam’s razor, they’re inherently unlikely.
[2] At least, in macroscopic systems. You can have negative absolute temperatures in systems with a population inversion (like a laser while it’s lasing), although these systems are generally considered thermodynamically hotter than positive-temperature systems (because heat flows from the negative-temperature system to the positive-temperature system).
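For anyone wondering why negative-temperature systems count as “hotter”, a quick sketch using the standard thermodynamic definition of temperature (a textbook result, nothing specific to the climate case):

$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}$$

In a population-inverted system, adding energy pushes it towards full inversion, which has fewer accessible microstates, so $\partial S/\partial E < 0$ and hence $T < 0$. When such a system touches any positive-temperature system, total entropy increases by moving energy out of the negative-temperature system, so it behaves as hotter than any positive temperature.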
[3] From the introduction to section 7.5 of the Working Group I contribution to the Sixth Assessment Report (p.993).
I don’t currently have a confident view on this beyond “We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.”
But I agree that if we could reach a confident position here (or even just a confident list of considerations), that would be useful for people — so thanks, this is a helpful suggestion!
Thanks, this is an interesting heuristic, but I think I don’t find it as valuable as you do.
First, while I do think it’d probably be harmful in expectation to work at leading oil companies / at the Manhattan project, I’m not confident in that view — I just haven’t thought about this very much.
Second, I think that AI labs are in a pretty different reference class from oil companies and the development of nuclear weapons.
Why? Roughly:
Whether, in a broad sense, capabilities advances are good or bad is pretty unclear. (Note that some capabilities advances in particular areas are very clearly harmful.) In comparison, I do think that, in a broad sense, the development of nuclear weapons and the release of greenhouse gases are harmful.
Unlike with oil companies and the Manhattan Project, I think that there’s a good chance that a leading, careful AI project could be a huge force for good, substantially reducing existential risk — and so it seems weird not to consider working at what could be one of the world’s most (positively) impactful organisations. Of course, you should also consider the chance that the organisation could be one of the world’s most negatively impactful organisations.
Because these issues are difficult and we don’t think we have all the answers, I also published a range of opinions about a related question in our anonymous advice series. Some of the respondents took a very sceptical view of any work that advances capabilities, but others disagreed.
Hi Yonatan,
I think that for many people (but not everyone) and for many roles they might work in (but not all roles), this is a reasonable plan.
Most importantly, I think it’s true that working at a top AI lab as an engineer is one of the best ways to build technical skills (see the section above on “it’s often excellent career capital”).
I’m more sceptical about the ability to push towards safe decisions (see the section above on “you may be able to help labs reduce risks”).
The right answer here depends a lot on the specific role. I think it’s important to remember that not all AI capabilities work is necessarily harmful (see the section above on “you might advance AI capabilities, which could be (really) harmful”), and that top AI labs could be some of the most positive-impact organisations in the world (see the section above on “labs could be a huge force for good—or harm”). On the other hand, there are roles that seem harmful to me (see “how can you mitigate the downsides of this option”).
I’m not sure of the relevance of “having a good understanding of how to do alignment” to your question. I’d guess that lots of knowing “how to do alignment” is being very good at ML engineering or ML research in general, and that working at a top AI lab is one of the best ways to learn those skills.
Should you work at a leading AI lab? (including in non-safety roles)
AI safety technical research—Career review
The Portuguese version at 80000horas.com.br is a project of Altruísmo Eficaz Brasil. We often give people permission to translate our content when they ask—but as to when, that would be up to Altruísmo Eficaz Brasil! Sorry I can’t give you a more concrete answer.
Give feedback on the new 80,000 Hours career guide
(Personal views, not representing 80k)
My basic answer is “yes”.
Longer version:
I think this depends what you mean.
By “longtermism”, I mean the idea that improving the long-run future is a key moral priority. By “longtermist” I mean someone who personally identifies with belief in longtermism.
I think x-risks are the most pressing problems from a cause-neutral perspective (although I’m not confident about this; there are a number of plausible alternatives, including factory farming).
I think longtermism is also (approximately) true from a cause-neutral perspective (I’m also not confident about this).
The implication between these two beliefs could go either way, depending on how you structure the argument. You could first argue that x-risks are pressing, which in turn implies that protecting the long-run future is a priority. Or you could argue the other way: that improving the long-run future is important and that reducing x-risks is a tractable way of doing so.
Most importantly though, I think you can believe that x-risks are the most pressing issue, and indeed believe that improving the long-run future is a key moral priority of our time, without identifying as a “longtermist”.
Indeed, I think that there’s sufficient objectivity in the normative claims underlying the pressing-ness of x-risks that, according to my current meta-ethical and empirical beliefs, I just believe it’s true that x-risks are the most pressing problems (again, I’m not hugely confident in this claim). The truth of this statement is independent of the identity of the actor, hence my answer “yes”.
Caveat:
If, by your question, you mean “Do you think working on x-risks is the best thing to do for non-longtermists?” the answer is “sometimes, but often no”. This is because a problem being pressing on average doesn’t imply that all work on that problem is equally valuable: personal fit and the choice of intervention both play an important role. I’d guess that it would be best for someone with lots of experience working on a particularly cost-effective animal welfare intervention to work on that intervention rather than move into x-risks.
Thank you so much for spotting this! It seems like both your points are correct.
To explain where these mistakes came from:
I think visceral gout can be caused by infectious disease, but it can also be caused by other factors, such as poor nutrition (see, e.g., this post from a hen breeding company), so it’s not correct to classify it as an infectious disease. The referenced article in footnote 13 investigated the frequency of various diseases in chicken farms in Bangladesh, and found that visceral gout was the most common identified disease (but they correctly do not say it was the most common identified infectious disease).
Vomiting is a symptom of coccidiosis in other animals (e.g. dogs), but, as you say, not in chickens; chickens cannot vomit. I must have looked up the symptoms separately from the frequency data.
I couldn’t find any previous collation of evidence about how animals are treated that I trusted, so this took a lot of research. (Most claims on this topic on the internet are unreferenced and appear on websites with a clear agenda: either animal advocacy or the farming industry.) As a result, I don’t doubt there are further mistakes in that section, but hopefully none that detract from the underlying point.
(I no longer work at 80,000 Hours but I’ll ask them to fix this on the website.)