However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We’re not sure of the more precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it’s important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we’ve spoken to expect that most non-safety projects aren’t likely to be causing harm.
I found this a bit hard to follow, especially given the focus in the previous paragraphs on safety work specifically. It reads to me like it’s making the counterintuitive claim that “safety” work is actually where much of the danger lies. Is that intended?
That’s not the intention, thanks for pointing this out!
To clarify, by “route” I mean gaining experience in this space by working in engineering roles directly related to AI. Where those roles are not specifically focused on safety, it’s important to consider any downside risk that could come from advancing general AI capabilities (this will vary a lot across roles and can be very difficult to estimate).