Thanks for exploring this issue! I agree that there could be more mutual understanding between the AI safety community and the wider AI community, and I’m curious to do more thinking about this.
I think each of the 3 claims you make in the body of the text is broadly true. However, I don’t think they directly back up the claim in the title that “AI safety is not separate from near-term applications”.
I think there are some important ways that AI safety is distinct: it goes one step further by imagining the capabilities of future systems and trying to anticipate ways they could go wrong ahead of time. There are some research questions it’d be hard to work on if the AI safety field weren’t separate from current-day application research, e.g. agent foundations, inner misalignment, and detecting deception.
Still, I think I agree with much of your sentiment. To illustrate what I mean, I would like it to be true that:
- Important safety issues in current-day AI applications are worked on by many people, and there is mutual respect between our communities
- Work done by near-term application researchers is known to and can be leveraged by the AGI safety community
- Ultimately, there is still a distinct, accessible AGI safety community that works on issues specific to advanced, general AI systems
No disagreements here. I guess I imagine AIS&L work, along with work on the neartermist examples I mentioned, as a Venn diagram with healthy overlap. I’m glad the AIS&L community exists, and I think it tackles some truly unique problems. By “separate” I essentially meant “disjoint” in the title.