I also think that a lot of work branded as safety (for example, work developed by a team called the safety team or alignment team) could reasonably be considered to be advancing “capabilities” (as the field is often divided).
My main point is that I recommend checking the specific project you’d work on, and not only what it’s branded as, if you think advancing AI capabilities could be dangerous (which I do think).
I personally think that “does this advance capabilities” is the wrong question to ask; instead, you should ask “how much does this advance capabilities relative to safety?” Safer models are just more useful, and often more profitable! E.g. I care a lot about avoiding deception, but honest models are just generally more useful to users (beyond white lies, I guess), and I think it would be silly for no one to work on detecting or reducing deception. I think most good safety work will inherently advance capabilities in some sense, and this is a sign that it’s actually doing something real. I struggle to think of any work that I consider both useful and not advancing capabilities at all.
My frank opinion is that the solution to not advancing capabilities is keeping the results private, and especially not sharing them with frontier labs.
((
Making sure I’m not missing our crux completely: do you agree that:
1. AI has a non-negligible chance of being an existential problem
2. Labs advancing capabilities are the main thing causing that
))
1 is very true. 2 I agree with, apart from the word “main”: it seems hard to label any single factor as “the main” thing, and there’s a bunch of complex reasoning about counterfactuals. E.g. if GDM stopped work, that wouldn’t stop Meta, so is GDM working on capabilities actually the main thing?
I’m pretty unconvinced that not sharing results with frontier labs is tenable. Leaving aside that these labs are often the best places to do certain kinds of safety work, if our work is to matter, we need the labs to use it! And you often get valuable feedback on the work by seeing it actually used in production. Having a bunch of safety people who work in secret and then unveil their safety plan at the last minute seems very unlikely to work to me.