Mainstream Christianity is guilty of this, though so are many other social movements.
All sects of an organized religion ultimately descend from what was likely a single, unified version at the religion’s founding. Unless a sect has acknowledged which of the religion’s original prophecies were wrong, every sect has inherited the same mistakes. As far as I’m aware, almost no minor sect of any organized religion acknowledges those mistakes any more than the mainstream sects do.
EA as a whole should seek to understand why we got it so wrong
There isn’t anything like a consensus; it’s not even evident that a majority of the EA/x-risk community has short timelines for artificial general intelligence (AGI). There have been one or more surveys of the AI safety/alignment community on this subject, but I’m not aware of any dataset cataloguing the timelines of specific organizations in the field.
I’d also like to see more concrete, testable short-term predictions from those we trust with AI predictions. Are they good forecasters in general? Are they well calibrated, or insightful in ways we can test?
Improving forecasting has become relevant to multiple focus areas in EA, so it’s become something of a focus area in itself. There are multiple forecasting organizations that specifically focus on existential risks (x-risks) in general and also AI timelines.
As far as I’m aware, “short timelines” in such predictions range from a few months to a few years out. I’m also not aware whether whole organizations making AI timeline predictions are logging their predictions the way individual forecasters do. The relevant data may not yet be organized in a way that directly provides a summary track record for the forecasters in question, but much of it does exist and should be accessible. It wouldn’t be too hard to track and catalogue it to get those answers.