I didn't downvote, but it seems like you are attacking a straw man here... the book is explicitly focused on the conditional IF anyone builds it. They never claim to know how to build it but simply suggest that it is not unlikely to be built in the future. I don't know which world you are living in, but this starting assumption seems pretty plausible to me (and to quite a few other people more knowledgeable than me on these topics, such as Nobel Prize and Turing Award winners...). If not in 5 years, then maybe in 50.
I would say at this point the burden is on you to make the case that the overall topic is nothing to worry about. Why not write your own book or posts where you let your arguments speak for themselves?
Eliezer Yudkowsky forecasts a 99.5% chance of human extinction from AGI "well before 2050", unless we implement his proposed aggressive global moratorium on AI R&D. Yudkowsky deliberately avoids giving more than a vague forecast on AGI, but he often strongly hints at a timeline. For example, in December 2022, he tweeted:
Pouring some cold water on the latest wave of AI hype: I could be wrong, but my guess is that we do *not* get AGI just by scaling ChatGPT, and that it takes *surprisingly* long from here. Parents conceiving today may have a fair chance of their child living to see kindergarten.
In April 2022, when Metaculus' forecast for AGI was in the 2040s and 2050s, Yudkowsky harshly criticized Metaculus for having too long a timeline and not updating it downwards fast enough.
In his July 2023 TED Talk, Yudkowsky said:
At some point, the companies rushing headlong to scale AI will cough out something that's smarter than humanity. Nobody knows how to calculate when that will happen. My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.
In March 2023, during an interview with Lex Fridman, Fridman asked Yudkowsky what advice he had for young people. Yudkowsky said:
Don't expect it to be a long life. Don't put your happiness into the future. The future is probably not that long at this point.
In that segment, he also said, "we are not in the shape to frantically at the last minute do decades' worth of work."
After reading these examples, do you still think Yudkowsky only believes that AGI is "not unlikely to be built in the future", "if not in 5 then maybe in 50 years"?
I didn't comment on the accuracy of individual timelines but emphasized that the main topic of the book is the conditional "what if"... it doesn't really make sense to critique the book at length for something it only touches on tangentially to motivate the relevance of its main topic. And they are not making outrageous claims here if you look at the ongoing discourse and the ramp-up in investment.
It's possible to take Yudkowsky seriously even if you are less certain about timelines and outcomes.
It could be an interesting exercise for you to reflect on the origins of your emotional reactions to Yudkowsky's views.
I think it's fair to criticize Yudkowsky and Soares' belief that there is a very high probability of AGI being created within ~5-20 years because that is a central part of their argument. The purpose of the book is to argue for an aggressive global moratorium on AI R&D. For such a moratorium to make sense, probabilities need to be high and timelines need to be short. If Yudkowsky and Soares believed there was an extremely low chance of AGI being developed within the next few decades, they wouldn't be arguing for the moratorium.
So, I think Oscar is right to notice and critique this part of their argument. I don't think it's fair to say Oscar is critiquing a straw man.
You can respond with a logical, sensible appeal to the precautionary principle: shouldn't we prepare anyway, just in case? First, I would say that even if this is the correct response, it doesn't make Oscar's critique wrong or not worth making. Second, I think arguments about whether AGI will be safe or unsafe, easy or hard to align, and what to do to prepare for it all depend on specific assumptions about how AGI will be built. So, this is not actually a separate question from the topic Oscar raised in this post.
It would be nice if there were something we could do just in case, to make any potential future AGI system safer or easier to align, but I don't see how we can do this in advance of knowing what technology or science will be used to build AGI. So, the precautionary principle response doesn't add up, either, in my view.
I don't think it's unreasonable to discuss the appropriateness of particular timelines per se, but the fact remains that this is not the purpose or goal of the book. As I acknowledged, short- to medium-term timelines are helpful for motivating the relevance or importance of the issue. However, I think timelines in the 5-to-50-year range are a very common position now, which means that the book can reasonably use this as a starting point for engaging with its core interest, the conditional "what if".
Given this as a backdrop, I think it's fair to say that the author of this post is engaging in a form of straw manning. He is not simply saying: "look, the actions suggested are going too far because the situation is not as pressing as they make it out to be, we have more time"... No, he is claiming that "Yudkowsky and Soares' Book Is Empty" on the grounds that they do not give an explicit argument for how to build an AGI. I mean, come on, how ironic would it be if the book arguing against building these kinds of machines provided the template for building them?
So, I really fail to see the merit of this kind of critique. I mean, you can disagree with the premise that we will be able to build generally intelligent machines in the nearish future, but given the trajectory of current developments, it seems a bit far-fetched to call the book's starting point unreasonable.
As I have said multiple times now, I am not against having open debate about these things; I am just trying to explain why I think people are not "biting" on this kind of content.
P.S.: If you look at the draft treaty they propose, I think it's clear that they are not proposing to stop any and all AI R&D, but specifically R&D aimed at ASI. Given the general-purpose nature of AI, this will surely limit "AI progress", but one could very well argue that society already has enough catching up to do with where we are right now. I also think it's quite important to keep in mind that there is no inherent "right" to unrestricted R&D. As soon as any kind of "innovation" such as "AI progress" also affects other people, our baseline orientation should be one of balancing interests, which can reasonably include limitations on R&D (e.g., nuclear weapons, human cloning, etc.).
I've presented some of my arguments in articles on my Substack, as well as in a philosophy of mind book I wrote addressing topics like "what is reasoning/thinking?" that sadly I haven't been able to get published yet. On my Substack I also have articles on Hinton and others.
You are not addressing the key point of my comment, which concerns the nature of their argument and your straw manning of their position. Why should I take your posts seriously if you feel the need to resort to these kinds of tactics?
I am just trying to give you some perspective on why people might feel the need to downvote you. If you want people like me to engage (although I didn't downvote, I don't really have an interest in reading your blog), I would recommend meeting us where we are: concerned about current developments potentially leading to concentration of power or worse, and looking for precautionary responses. Theoretical arguments are fine, but your whole "confidence" vibe is very off-putting to me given the situation we find ourselves in.