I’m Jeffrey Ladish. I’m a security researcher and risk consultant focused on global catastrophic threats. My website is at https://jeffreyladish.com
Information security considerations for AI and the long term future
Donation offsets for ChatGPT Plus subscriptions
Nuclear war is unlikely to cause human extinction
Update on civilizational collapse research
US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk
Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments
EA Hangout Prisoners’ Dilemma
My vision of a good future, part I
Does the US nuclear policy still target cities?
I think working at a top security company could be a way to gain a lot of otherwise hard-to-get experience. Trail of Bits, NCC Group, and FireEye are a few that come to mind.
I think it’s worth noting that I’d expect you to gain a significant relative advantage if you get out of cities before other people do, such that acting later would be much less effective at furthering your survival & rebuilding goals.
I expect the bulk of the risk of an all-out nuclear war to fall in the couple of weeks after the first nuclear use. If I’m right, then the way to avoid the failure mode you’re identifying is to return after a few weeks if no new nuclear weapons have been used, or something similar.
When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️
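A minimal sketch of the distinction between those two quantities, using entirely made-up numbers (the timeline distribution and the assumed link between timeline length and alignment success are hypothetical, chosen only to illustrate how conditioning can shift the median):

```python
import random
import statistics

random.seed(0)

# Hypothetical: sample (timeline_years, alignment_success) pairs.
samples = []
for _ in range(10_000):
    years = random.lognormvariate(2.5, 0.8)  # made-up timeline distribution
    # Arbitrary assumption: alignment success is likelier on longer timelines.
    success = random.random() < min(0.9, years / 40)
    samples.append((years, success))

# Median over all sampled futures.
median_all = statistics.median(y for y, _ in samples)
# Median conditional on eventual alignment success.
median_given_success = statistics.median(y for y, s in samples if s)

print(median_all, median_given_success)
```

Under these toy assumptions the conditional median comes out longer than the unconditional one, which is the crux of the question: planning for the median future and planning for the median future given eventual success can point at quite different timelines.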
Really appreciate you! It’s felt stressful at times just as someone in the community, and it’s hard to imagine how much more stressful it would feel in your shoes. Thank you for your hard work; I think the EA movement is significantly improved by your efforts maintaining, improving, and moderating the forum, and by all the mostly-unseen-but-important work mitigating conflicts & potential harm in the community.
Thoughts on the OpenAI alignment plan: will AI research assistants be net-positive for AI existential risk?
An additional point is that “relevant roles in government” should probably include contracting work as well. So it’s possible to go work for Raytheon, get a security clearance, and do cybersecurity work for the government (and that pays significantly better!)
Could you define ESG investing at the beginning of your post?
I think I gave the impression that I’m making a more expansive claim than I actually mean to make, and I’ll edit the post to clarify this. The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically. I know most people who’ve examined the question recognize this is wrong, but I wanted that information laid out clearly, so someone could get a summary of the argument. I think that’s just the beginning of assessing existential risk from nuclear war, and I really wouldn’t want people to read my post and walk away thinking “nuclear war is nothing to worry about from a longtermist perspective.”
I agree that “We know that one type of existential risk from nuclear war is very small, but we don’t really have a good idea for how large total existential risk from nuclear war”. I’m planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.
It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity’s potential / eventual extinction. I currently think humans would retain the most significant basic survival technologies following a collapse and then reacquire lost technological capacities relatively quickly. (I discussed this investigation here, though not in depth.) I’m planning to write this up as part of my compounding risks post or as a separate one.
Agreed that it’s very hard to know the sign on a huge history-altering event, whether it’s a nuclear war or covid.
I think the problem is the vagueness of the type of commitment the GWWC pledge represents. If it’s an ironclad commitment, people should lose a lot of trust in you. If it was a “best of intentions” type commitment, people should only lose a modest amount of trust in you. I think the difference matters!
I want to give a brief update on this topic. I spent a couple of months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.
My conclusions after investigating potential collapse scenarios:
1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a “civilizational collapse”, where an unprecedented number of people die and key technologies are (temporarily) lost.
2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.
3) The highest-leverage point for intervention in a potential post-collapse environment would be at the state level. Individuals, even wealthy individuals, lack infrastructure and human resources at the scale necessary to rebuild effectively. There are some decent mitigations possible in the space of information archival, such as seed banks and internet archives, but these are far less likely to have long-term impact compared to state efforts.
Based on these conclusions, I decided to focus my efforts on other global risk analysis areas, because I felt I didn’t have the relevant skills or resources to embark on a state-level project. If I did have those skills & resources, I believe (low to medium confidence) it would be a worthwhile project, and if I found a person or group who did possess those skills & resources, I would strongly consider offering my assistance.
This is a big area of uncertainty for me. I agree that Google & other top companies would be quite valuable, but I’m much less convinced that government work will be as good. At high levels of the NSA, CIA, military intelligence, etc., I expect it to be, but for someone getting early experience, it’s less obvious. Government positions are probably going to be less flexible / more constrained in the types of problems you can work on, and to have lower-quality mentorship at the lower levels. Startups can be good if the startup values security (Reserve was great for me because I got to actually be in charge of security for the whole company & learn how to get people to use good practices), but most startups do not value security, so I wouldn’t recommend working for one unless it showed strong signs of valuing security.
My guess is that the important factors are roughly:
Good technical mentorship—While I expect this to be better than average at the big tech companies, it isn’t guaranteed.
Experience responding to real threats (i.e., a company that has enough attack surface and active threats to get a good sense of what real attacks look like)
Red team experience, as there is no substitute for actually learning how to attack a system
Working with non-security & non-technical people to implement security controls. I think most of the opportunities described in this post will require this kind of experience. Some technical security roles in big companies do not require it, since there is enough specialization that vulnerability remediation can happen via other teams.