I’m just seeing this post now and haven’t read it yet, but I’ll offer my initial impression based on the title alone, because I think that impression will be informative: I expect a lot of other readers will be triggered by the title before they ever get to the contents.
The claim that we are in the midgame, specifically, as opposed to the endgame, is contentious and controversial. It’s such a hot take that for many readers the only takeaway from this post will be that you were willing to make so bold a claim.
Months ago I lost count of the number of people I know who now predict that the absolute deadline for AGI ruin/doom is all but certainly between 2025 and 2030, and closer to 2025 than 2030. I’m not convinced of any of that, though a rapidly growing share of the people I know in AI alignment are convinced of all of it.
Many of our peers may feel it’s reckless to spread any message even suggesting we’re only in the midgame, as opposed to the endgame, as if there were ample time left to solve AI alignment.
AI timelines and trajectories, of all things, have become a shockingly sensitive topic to discuss in the AI alignment community.