This post seems to be getting a lot of downvotes. But its arguments seem reasonably well thought out, and it's obvious that the author is reasonably knowledgeable about the relevant topics. So it's not clear where the downvotes are coming from. Why does this post merit this much negative attention?
I saw on the LessWrong crosspost that someone commented this post was downvoted so much because of the title. But c'mon, guys… we shouldn't downvote things based on the title. This is basic stuff. Besides, I find it hard to believe that a pro-AI post with this title would be so heavily downvoted. If the post were titled "The probability that AGI will be developed by 2043 is 1," someone might critique the title in the comments, but I find it hard to believe it would be given the same treatment as this post (i.e., downvoted to the depths of hell).
I encourage anyone who takes issue with this post to explain why before downvoting, because this is one of the few well-thought-out posts on this forum that is critical of mainstream AI timelines. For it to get this many downvotes for seemingly no reason is a bad look. Seriously. It makes it look like EA has an axe to grind with AI skeptics (which is increasingly the impression I've been getting...). It seems like every AI post is either unrealistically alarmist or about PR strategies to convince people to also be unrealistically alarmist (which is suspicious, because if the arguments for AI's alleged danger were good, you'd think a marketing campaign would be unnecessary).
Thank you. I was also very confused and disappointed, especially because the downvotes did not come with comments. In fact, as you will see from my follow-up post, I have been begging for someone to tell me what's wrong with the argument. Yann LeCun and Christopher Manning on Twitter have not been able to.
So I am happy to receive substantive and convincing counterarguments, which I will of course try to answer, in the best tradition of the scientific method.
To put it more bluntly, this is infinitely wrong, such that we can immediately conclude that either clickbait happened or the author literally doesn't understand why overconfidence is so bad. Both are worthy of strong downvotes. Notably, I do not see the alignment community, or most anti-AGI-risk arguments, being this bad.
Sorry, I was trying for a dramatic title. I miscalculated with the audience. My arguments are, however, sincere. I will change my title.
Thank you, I’ve removed the downvotes despite still disagreeing. This is much, much less clickbaity.
Thank you for pointing out my miscalculation.
Yeah, I downvoted it because of the title. Assigning a probability of zero or one is bad epistemics.
I learnt my lesson. No flamboyant titles.