Zvi on the 80k podcast:

Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable thing to be doing.
I think that “I am going to take a job at specifically OpenAI or DeepMind for the purposes of building career capital or having a positive influence on their safety outlook, while directly building the exact thing that we very much do not want to be built, or we want to be built as slowly as possible because it is the thing causing the existential risk” is very clearly the thing to not do. There are all of the things in the world you could be doing. There is a very, very narrow group — hundreds of people, maybe low thousands of people — who are directly working to advance the frontiers of AI capabilities in the ways that are actively dangerous. Do not be one of those people. Those people are doing a bad thing. I do not like that they are doing this thing.
And it doesn’t mean they’re bad people. They have different models of the world, presumably, and they have a reason to think this is a good thing. But if you share anything like my model of the importance of existential risk and the dangers that AI poses as an existential risk, and how bad it would be if this was developed relatively quickly, I think this position is just indefensible and insane, and that it reflects a systematic error that we need to snap out of. If you need to get experience working with AI, there are indeed plenty of places where you can work with AI in ways that are not pushing this frontier forward.
The transcript is from the 80k website; the episode is also linked in the post. The exchange continues with Rob replying that the 80k view is “it’s complicated”, and Zvi responding to that.
I also think that a lot of work that is branded as safety (for example, work developed by a team called the “safety team” or “alignment team”) could reasonably be considered to be advancing “capabilities” (as the topic is often divided).
My main point is that I recommend checking the specific project you’d work on, and not only what it’s branded as, if you think advancing AI capabilities could be dangerous (which I do think).
I personally think that “does this advance capabilities” is the wrong question to ask; instead you should ask “how much does this advance capabilities relative to safety?” Safer models are just more useful, and more profitable, a lot of the time! E.g. I care a lot about avoiding deception, but honest models are just generally more useful to users (beyond white lies, I guess), and I think it would be silly for no one to work on detecting or reducing deception. I think most good safety work will inherently advance capabilities in some sense, and this is a sign that it’s actually doing something real. I struggle to think of any work that I think is both useful and doesn’t advance capabilities at all.
My frank opinion is that the solution to not advancing capabilities is keeping the results private, and especially not sharing them with frontier labs.
((
Making sure I’m not missing our crux completely: do you agree that:
1. AI has a non-negligible chance of being an existential problem
2. Labs advancing capabilities are the main thing causing that
))
1 is very true. 2 I agree with, apart from the word “main”: it seems hard to label any one factor as “the main” thing, and there’s a bunch of complex reasoning about counterfactuals. E.g. if GDM stopped work, that wouldn’t stop Meta, so is GDM working on capabilities actually the main thing?
I’m pretty unconvinced that not sharing results with frontier labs is tenable. Leaving aside that these labs are often the best places to do certain kinds of safety work, if our work is to matter, we need the labs to use it! And you often get valuable feedback on the work by seeing it actually used in production. Having a bunch of safety people work in secret and then unveil their safety plan at the last minute seems very unlikely to work to me.