Relatedly, a behaviour I dislike is being repeatedly and publicly wrong without changing course or acknowledging fault. Mainstream Christianity is guilty of this, though so are many other social movements.
I think that if short AI timelines turn out to be wrong, those who held them should acknowledge it, and EA as a whole should seek to understand why we got it so wrong. I will find it odd if people who repeatedly make wrong predictions continue to be taken seriously.
Also, I’d like to see more concrete, testable short-term predictions from those we trust with AI forecasts. Are they good forecasters in general? Are they well calibrated or insightful in ways we can test?
> I think that if short AI timelines turn out to be wrong, those who held them should acknowledge it, and EA as a whole should seek to understand why we got it so wrong. I will find it odd if people who repeatedly make wrong predictions continue to be taken seriously.
I think this only applies to people who are VERY confident in short timelines. Say you have a distribution over possible timelines that puts 50% probability on <20 years, and 20% probability on >60 years. This would be a really big deal! It’s a 50% chance of the world wildly changing in 20 years. But having no AGI within 60 years is only a 5x update against this model, hardly a major sin of bad prediction.
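To make that arithmetic concrete, here’s a minimal sketch, assuming (purely for illustration) a comparison model that puts all of its probability on AGI arriving after 60 years; the numbers are just the ones from the paragraph above.

```python
# A minimal sketch of the update described above, using the same illustrative numbers.
# The "long timelines" comparison model is a hypothetical one that puts all its
# probability mass on AGI arriving after 60 years.
p_obs_given_short = 0.20  # short-timelines model: P(no AGI within 60 years) = 20%
p_obs_given_long = 1.00   # hypothetical comparison model: P(no AGI within 60 years) = 100%

# Bayes factor: how strongly observing "no AGI within 60 years" favours the
# long-timelines model over the short-timelines one.
bayes_factor = p_obs_given_long / p_obs_given_short
print(bayes_factor)  # 5.0, i.e. the "5x update" mentioned above
```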
Though if someone is eg quitting their job and not getting a pension, they probably have a much more extreme distribution, so your point is pretty valid there.
> Though if someone is eg quitting their job and not getting a pension, they probably have a much more extreme distribution, so your point is pretty valid there.
I’m confused by that implication. I would make bets of that magnitude at substantially lower probabilities than 50%, and in fact have done so historically.
Though maybe “quitting their job and not getting a pension” is meant as a metaphor for “take very big life risks,” whereas to me e.g. quitting Google to join a crypto startup even though I had <20% credence in crypto booming, or explicitly not setting aside retirement monies in my early twenties, both seemed like pretty comfortable risks at the time, and almost not worth writing about from a risk-taking angle.
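For what it’s worth, the reason a sub-50% (or even sub-20%) bet can feel comfortable is plain expected value: if the upside is large and the downside bounded, the bet is worth taking at low probabilities. A minimal sketch with made-up numbers:

```python
# A rough expected-value sketch of the kind of bet described above.
# Every number here is an invented illustration, not my actual figures.
p_success = 0.20          # <20% credence that the risky move pays off
payoff_if_success = 10.0  # large upside (equity, career capital, ...) in arbitrary units
payoff_if_failure = -1.0  # bounded downside, e.g. roughly a year of forgone salary/security

expected_value = p_success * payoff_if_success + (1 - p_success) * payoff_if_failure
print(expected_value)  # 1.2 > 0: positive EV despite the low probability of success
```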
> Though maybe “quitting their job and not getting a pension” is meant as a metaphor for “take very big life risks,”
That’s fair pushback: a lot of that really doesn’t seem that risky if you’re young and have a very employable skillset. I endorse this rephrasing of my view, thanks.
I guess you’re still exposed to SOME increased risk, eg that the tech industry in general becomes much smaller/harder to get into/less well paying, but you’re still exposed to risks like “the US pension system collapses” anyway, so this seems reasonable to mostly ignore. (Unless there’s a good way of buying insurance against this?)
> Mainstream Christianity is guilty of this, though so are many other social movements.
All sects of any organized religion ultimately originate from what was likely a single, unified version from when the religion began. Unless a sect has acknowledged which of the religion’s original prophecies were wrong, they’ve all made the same mistakes. As far as I’m aware, almost no minor sect of any organized religion acknowledges those mistakes any more than the mainstream sects do.
> EA as a whole should seek to understand why we got it so wrong
There isn’t anything like a consensus here; it’s not even evident that a majority of the EA/x-risk community has short timelines for artificial general intelligence (AGI). There have been surveys of the AI safety/alignment community on this subject, but I’m not aware of any dataset cataloguing the timelines of specific organizations in the field.
> Also, I’d like to see more concrete, testable short-term predictions from those we trust with AI forecasts. Are they good forecasters in general? Are they well calibrated or insightful in ways we can test?
Improving forecasting has become relevant to multiple focus areas in EA, so it has become something of a focus area in itself. There are multiple forecasting organizations that focus specifically on existential risks (x-risks) in general and on AI timelines in particular.
As far as I’m aware, the “short-term” horizons for such testable predictions range from a few months to a few years out. Nor am I aware whether whole organizations making AI timeline predictions log their predictions the way individual forecasters do. The relevant data may not yet be organized in a way that directly provides a summary track record for the different forecasters in question. Yet much of that data does exist and should be accessible. It wouldn’t be too hard to track and catalogue it to get those answers.
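As a minimal sketch of what “getting those answers” could look like, here’s how one could compute a Brier score and a crude calibration table from a log of resolved predictions (the data below is invented):

```python
# Given a log of a forecaster's probabilistic predictions that have since resolved,
# compute a Brier score (lower is better) and a simple calibration table.
from collections import defaultdict

# Each entry: (stated probability that the event would happen, whether it happened)
resolved_predictions = [(0.9, True), (0.7, True), (0.6, False), (0.3, False), (0.2, True)]

# Brier score: mean squared error between the stated probability and the 0/1 outcome.
brier = sum((p - float(happened)) ** 2 for p, happened in resolved_predictions) / len(resolved_predictions)
print(f"Brier score: {brier:.3f}")

# Calibration: within each stated-probability bucket, how often did events actually happen?
buckets = defaultdict(list)
for p, happened in resolved_predictions:
    buckets[round(p, 1)].append(happened)
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"stated {p:.0%} -> observed {sum(outcomes) / len(outcomes):.0%} over {len(outcomes)} predictions")
```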