Given that you are criticising the epistemics of EAs taking AGI very seriously, I think it’s reasonable to hold this post to a higher epistemic standard than a typical EA forum post. Apologies if this comes across as combative—I spent some time trying to tone it down with Claude and struggled to get something that wasn’t just hedged/weak sauce. I am excited about more discussion of the capabilities of AI systems on the EA forum and would like more people to write up their takes on the current situation.
…
I think you are applying more rigour to the bullish case than the bearish one. For example, you say:
[Mythos not providing a substantive improvement in cybersec capabilities] is further highlighted by the fact that an independent analysis was able to find many of the same vulnerabilities using much smaller open-source models.
I think this is misleading for a few reasons:
AISLE is not an “independent” entity; its entire business model depends on harnesses mattering more than frontier models like Mythos.
That analysis does not “find” many of the same vulns; the vulnerabilities were presented to the LLMs selectively rather than discovered from scratch.
They don’t report a false positive rate, so it’s unclear how much validity the LLMs’ classifications have (see the quick sketch below).
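To make the false-positive point concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical assumption for illustration, not anything taken from AISLE’s analysis: when real vulnerabilities are rare among candidates, precision collapses quickly as the false positive rate rises.

```python
# Hypothetical illustration of why a missing false positive rate matters.
# All numbers below are made up for the sake of the example.

def precision(base_rate: float, tpr: float, fpr: float) -> float:
    """P(real vulnerability | model flags it), via Bayes' rule."""
    true_flags = tpr * base_rate
    false_flags = fpr * (1 - base_rate)
    return true_flags / (true_flags + false_flags)

# Suppose 2% of candidate locations hold a real vulnerability and the
# model catches 90% of them; vary the unreported false positive rate:
for fpr in (0.01, 0.05, 0.20):
    print(f"FPR {fpr:.0%} -> precision {precision(0.02, 0.90, fpr):.0%}")
# FPR 1% -> precision 65%
# FPR 5% -> precision 27%
# FPR 20% -> precision 8%
```

Under these (assumed) numbers, even a 5% false positive rate would mean most flags are wrong, which is why the omission matters.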
On the claim that Anthropic talks about risks from their own models primarily to create hype: I find this hard to square with the evidence. Talking about how your B2B product might be extremely dangerous, or publishing lengthy documents critically assessing your own product and admitting to errors that would be difficult to identify independently (e.g. accidentally training against the CoT), is not a common marketing tactic. Your model seems to imply that companies should only release materials optimised for their short-term interests, which fails to predict the real differences in how AI companies actually approach releases.
Benchmarks are interpreted uncritically
The benchmark contamination arguments are worth engaging with in principle, but I’m not sure they’re doing much work in practice—I don’t think many people in EA are actually updating heavily on raw benchmark scores right now. METR, arguably EA’s favourite benchmarking org, has been pretty vocal about their own benchmarks being saturated, so I think the community is reasonably aware of these limitations already.
Negative results are ignored
I’m genuinely uncertain what you want Anthropic and other AI companies to do here. Do you think “genuine intelligence” is easy to measure and well-defined? The more concrete concepts being used as proxies—coding ability, economic value generated, uplift—seem defensible on their own terms rather than as misleading substitutes for something more fundamental.
On “fundamental limits of LLMs” more broadly: these arguments have been made confidently by prominent researchers since the advent of LLMs and have not had a great track record. That doesn’t make them wrong, but it’s worth noting.
…
I think this post would be much stronger if it applied its standards more symmetrically. It would also help to have a more concrete conclusion. The current takeaway is essentially “further research is needed”, which is a claim you can make about almost any area of research (so much so that the phrase has been banned by multiple journals), and I don’t have a great sense of what research would actually convince you that the “AI hype” is reasonable.