However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We're not sure of the precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it's important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we've spoken to expect that most non-safety projects aren't likely to be causing harm.
I found this a bit hard to follow, especially given the focus in the previous paragraphs on safety work specifically. It reads to me like it's making the counterintuitive claim that "safety" work is actually where much of the danger lies. Is that intended?
That's not the intention, thanks for pointing this out!
To clarify, by "route", I mean gaining experience in this space by working in engineering roles directly related to AI. Where those roles aren't specifically focused on safety, it's important to try to consider any downside risk that could result from advancing general AI capabilities (in general this will vary a lot across roles and can be very difficult to estimate).