Maybe half the community sees it that way. But not the half with all the money and power, it seems. There aren't (yet) large resources being put into playing the "outside game". And there hasn't been anything in the way of EA leadership (OpenPhil, 80k) admitting the error, afaik.
Seems pretty dependent on how seriously you take some combination of: AI x-risk in general, the likelihood of the naïve scaling hypothesis holding (if it even holds at all), and the trade-off between empirical and theoretical work on AI Safety, no?
These all seem good topics to flesh out further! Is 1 still a "hot take" though? I thought this was pretty much the consensus here at this point?