Thinking this through: what’s novel is not so much the idea that the path AI takes affects non-human welfare, but that it’s worth developing this as its own subfield.
And the argument for this is much stronger in the current context: the arguments for rapid AI progress, AI companies not being responsible by default and AI not being aligned by default are much more legible these days.
And that makes it much easier to build energy around this: there seem to be folks in the EA animal welfare crowd who were skeptical about AI/AI risk before, but now see that it is going to be a big deal. Compared to standard AI alignment/governance, the explicit inclusion of animals resonates more with their current interests, in addition to being an area where their existing skills and knowledge are likely to be more applicable.
So I suspect what matters is not just having the idea, but deciding to promote the idea in the right context.