Note: this was written kinda quickly, so it might be a bit less tactful than I would write if I had more time.
Making a quick reply here after binge-listening to three Epoch-related podcasts in the last week, and I basically think my original perspective was vindicated. It was kinda interesting to see which points were repeated or phrased a different way. I'd recommend them if you're interested in the topic.
1. The initial podcast with Jaime, Ege, and Tamay. This clearly positions the Epoch brain trust as between traditional academia and the AI Safety community (AISC). tl;dr: academia has good models but doesn't take AI seriously, and the AISC has the opposite problem (from Epoch's PoV).
2. The 'debate' between Matthew and Ege. This should have clued people in, because while it was full of good content, by the last hour or hour and a half it almost seemed to turn into openly mocking and laughing at the AISC, or at least at its traditional arguments. I also don't buy those arguments, but the reaction Matthew and Ege have suggests they simply don't buy the root AISC claims.
3. The recent Dwarkesh podcast with Ege and Tamay. This is the best of the three, but probably also best listened to after the first two, since Dwarkesh actually pushes back on quite a few claims, which means Ege and Tamay flesh out their views more. A personal highlight was the discussion of what the reference class for AI takeover actually means.
Basically, the Mechanize cofounders don't agree at all with 'AI Safety Classic'. I am very confident that they don't buy the arguments and that they don't identify with the community, and somewhat confident that they don't respect the community or its intellectual output that much.
Given that their views are: a) AI will be a big deal soon (within ~a few decades), b) the returns to AI will be very large, c) alignment concerns/AI risks are overrated, and d) other people/institutions aren't on the ball, starting an AI startup seems to make sense.
What is interesting to note, and something I might look into in the future, is just how much these differences in expectations about AI stem from differences in worldview, rather than from differences in understanding of ML or of how the systems work on a technical level.
So why are people upset?
Maybe they thought the Epoch people were more a part of the AISC than they actually were? That seems like the fault of the people who believed this, not of Epoch or the Mechanize founders.
Maybe people are upset that Epoch was funded by OpenPhil, and that this seems to have led to 'AI acceleration'? I think that's plausible, but Epoch has still produced high-quality reports and information, which OP presumably wanted it to do. And equating EA with OP, or with anyone funded by OP, doesn't seem like a useful framing to me.
Maybe people are upset at any progress in AI capabilities? But that assumes Mechanize will be successful in its aims, which is not guaranteed. It also seems to reify 'capabilities' as one big thing, which I don't think makes sense. Making a better Stockfish, or a better AI for FromSoft bosses, does not increase x-risk, for instance.
Maybe people think the AI Safety Classic arguments are just correct, and therefore that any action not premised on them is bad? But many actions look bad by that criterion all the time, so it's odd that this case would provoke such a reaction. I also don't think EA should hang its hat on the AI Safety Classic arguments being correct anyway.
Probably it's some mix of these. I personally remain not that upset because a) I didn't really class Epoch as 'part of the community', b) I'm not really sure I'm 'part of the community' either, and c) my views are at least somewhat similar to the Epoch set above, though maybe not as far in their direction, so I'm not as concerned on the object level either.
To steelman this:
Even assuming OP funding != EA, one might still consider OP funding to count as funding from the AI Safety Club (TM), and the Mechanize critics to be speaking in their capacity as members of the AISC rather than of EA. Being upset that AISC money supported the development of people who are now working to accelerate AI seems understandable to me.
Epoch fundraised on the Forum in early 2023 and solicited applications for employment on the Forum as recently as December 2024. Although I don’t see any specific references to the AISC in those posts, it wouldn’t be unreasonable to assume some degree of alignment from its posting of fundraising and recruitment asks on the Forum without any disclaimer. (However, I haven’t heard a good reason to impute Epoch’s actions to the Mechanize trio specifically.)
My own take on the AI Safety Classic arguments is that o3/Sonnet 3.7 have convinced me that the 'alignment is very easy' hypothesis looks a lot shakier than it used to, and I suspect future capabilities progress will be at best neutral, and probably negative, for alignment being very easy.
I do think you can still remain optimistic based on other considerations, but a pretty core crux for me is that alignment does need to be solved if AIs become able to automate the economy, and this holds across most variations in how AI plays out.
The big reason for this is that once your labor is valueless but your land/capital isn't, you have fundamentally knocked out a load-bearing pillar of the argument that expropriation is less useful than trade (a toy version of this calculation is sketched below).
This is, to a first approximation, why we enslave or kill most non-human species rather than trading with them.
(For farm animals, their labor is useful, but much of what humans want from animals fundamentally requires expropriation, i.e. violating the farm animals' property rights.)
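To make the trade-vs-expropriation point concrete, here's a minimal toy model. The payoff functions and every number in it are my own illustrative assumptions, not anything from the podcasts or the authors; the only point it demonstrates is that the trade payoff scales with labor value while the expropriation payoff doesn't:

```python
# Toy model: when does a more powerful actor prefer trade over expropriation?
# All parameters are illustrative assumptions, not empirical estimates.

def payoff_trade(labor_value: float, capital_value: float,
                 trade_share: float = 0.5) -> float:
    # Trading: the actor gets a share of the total surplus, which includes
    # human labor (obtainable only through ongoing cooperation).
    return trade_share * (labor_value + capital_value)

def payoff_expropriate(labor_value: float, capital_value: float,
                       conflict_cost: float = 10.0) -> float:
    # Expropriating: the actor seizes land/capital outright, forfeits any
    # future human labor (hence labor_value is unused), and pays a one-off
    # cost of conflict.
    return capital_value - conflict_cost

capital_value = 100.0
for labor_value in [100.0, 10.0, 0.0]:  # labor value falling as AI automates
    t = payoff_trade(labor_value, capital_value)
    e = payoff_expropriate(labor_value, capital_value)
    winner = "trade wins" if t > e else "expropriation wins"
    print(f"labor={labor_value:>5}: trade={t:.0f}, expropriate={e:.0f}, {winner}")
```

With these made-up numbers, trade dominates while human labor is valuable and flips to expropriation as labor value falls toward zero; the exact crossover is an artifact of the parameters, but the direction of the flip is the load-bearing pillar being knocked out.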
At minimum, a good picture of what happens if we fail is the intelligence curse scenario elaborated by Rudolf Laine and Luke Drago below:
https://intelligence-curse.ai/defining/