Credulous really is the right word. There is a strand of dialogue in EA circles that feels like “we called much of this many years ago,” therefore “everything that transpires will mimic our thought experiments perfectly.” The marketing from frontier labs is the offspring of early EA/LW ideas. The potential for confirmation bias here is astronomical.
We should expect to get constantly nerdsniped by frontier labs. And we have. Most EAs I talk to think Claude Code has made (or nearly made) software engineering a closed loop of recursive self-improvement (RSI). They see the METR graph as a direct line pointing to AGI. They see AI 2027 as a principled, ballpark estimate for encroaching doom.
More skepticism and more posts like this seem incredibly important.
I also think that when assessing whether EAs are overly trusting of frontier labs’ claims because those claims fit their broader views, the more relevant data point is that EAs generally believed Altman and Musk when they said they were founding OpenAI to do philanthropic research, at a time when basically everybody else understood what they were really trying to do. That matters more than the fact that EAs correctly called transformers being a big deal when the average computer scientist was a bit more cautious.
GPT-2 was “too dangerous to release” as a marketing strategy too.