Feels like Anthropic has been putting out a lot of good papers recently that help build the case for various AI threats. Given this, “no meaningful path to impact” seems a bit strong.
What happens because of these papers? Do they influence Anthropic to stop developing powerful AI? Evidently not.