I enjoyed this post. Short and to the point.
I’d like to add that the stakes are high enough to justify putting resources into every reasonable angle of attack on the problem. Even if foundational research has only a sliver of a chance of impacting future alignment, that sliver contains quite a lot of value. And I think it’s in fact quite a bit more than a sliver.
My only caveat is that a lot of work that is supposed to “help” reduce existential AI risk is net-negative, due to accelerating capabilities, creating race dynamics, enabling dangerous misuse, etc. But that seems much less likely to be a concern for the type of work described in the post.