My only caveat is that a lot of work that is supposed to “help” reduce existential AI risk is net-negative, due to accelerating capabilities, creating race dynamics, enabling dangerous misuse, etc. But that risk seems much less likely for the type of work described in the post.