I agree with the "you don't have to debate on their terms" point here, but I think for 99% of your readers/listeners, it cuts far more strongly in a different direction than the one you're implying.
The debate has generally been set in terms of "Anthropic vs. DoW", and, while I know zero people in our community who have taken the government's side on this, I've seen many EAs and adjacent people become increasingly uncritical supporters of Anthropic, just because they're standing against the obviously bad actor in this situation.
I think it's important to remember:

- If you thought Anthropic was untrustworthy before this, you shouldn't update too much the other way, especially when they backtracked on their RSP over the same period.
- If you thought that Anthropic's decision to join the race towards AGI was perilous, you shouldn't really update your view on this based on the Pentagon being absurd and unpredictable.
- Regardless of the intention and character of the government actors, it's potentially still a worrying sign that the most powerful state in the world has tried to shut down a frontier AI lab and failed spectacularly.
The growth and popularity of Anthropic and Claude Code have since caused the AI 2027 team to shorten their AGI timelines.