Thanks for your work on this, super interesting! Based on just quickly skimming, this part seems most interesting to me, and I feel like discounting the bottom line of the sceptics because their points seem relatively unconvincing to me (either unconvincing on the object level, or because I suspect that the sceptics haven’t thought deeply enough about the argument to evaluate how strong it is):
We asked participants when AI will displace humans as the primary force that determines what happens in the future. The concerned group’s median date is 2045 and the skeptic group’s median date is 2450—405 years later.
[Reasons for the ~400-year discrepancy:]
● There may still be a “long tail” of highly important tasks that require humans, similar to what has happened with self-driving cars. So, even if AI can do >95% of human cognitive tasks, many important tasks will remain.
● Consistent with Moravec’s paradox, even if AI has advanced cognitive abilities it will likely take longer for it to develop advanced physical capabilities. And the latter would be important for accumulating power over resources in the physical world.
● AI may run out of relevant training data to be fully competitive with humans in all domains. In follow-up interviews, two skeptics mentioned that they would update their views on AI progress if AI were able to train on sensory data in ways similar to humans. They expected that gains from reading text would be limited.
● Even if powerful AI is developed, it is possible that it will not be deployed widely, because it is not cost-effective, because of societal decision-making, or for other reasons.
And, when it comes to outcomes from AI, skeptics tended to put more weight on possibilities such as
● AI remains more “tool”-like than “agent”-like, and therefore is more similar to technology like the internet in terms of its effects on the world.
● AI is agent-like but it leads to largely positive outcomes for humanity because it is adequately controlled by human systems or other AIs, or it is aligned with human values.
● AI and humans co-evolve and gradually merge in a way that does not cleanly fit the resolution criteria of our forecasting questions.
● AI leads to a major collapse of human civilization (through large-scale death events, wars, or economic disasters) but humanity recovers and then either controls or does not develop AI.
● Powerful AI is developed but is not widely deployed, because of coordinated human decisions, prohibitive costs to deployment, or some other reason.
either unconvincing on the object level, or because I suspect that the sceptics haven’t thought deeply enough about the argument to evaluate how strong it is
The post states that the skeptics spent 80 hours researching the topics, and were actively engaged with concerned people. For the record, I have probably spent hundreds of hours thinking about the topic, and I think the points they raise are pretty good. These are high-quality arguments: you just disagree with them.
I think this post pretty much refutes the idea that if skeptics just “thought deeply” they would change their minds. It very much comes down to principled disagreement on the object-level issues.