For people who haven’t been around for a while, the history of AI x-risk as a cause area is actually one of a long struggle for legitimacy and significant funding. 20 years ago, only Eliezer Yudkowsky and a handful of other people even recognised there was a problem. 15 years ago, there was a whole grass-roots movement of people (centred around the Overcoming Bias and LessWrong websites) earning to give to support MIRI (then the Singularity Institute), as they were chronically underfunded. 10 years ago, Holden Karnofsky was arguing against it being a big problem. The fact that AI x-risk now has a lot of legitimacy and funding is a result of the arguments for taking it seriously winning many long and hard battles. Recently, huge prizes were announced for arguments that it wasn’t a (big) risk. Before their cancellation, not much was produced in the way of good arguments imo. OpenPhil are now planning on running a similar competition. If there really are great arguments against AI x-risk being a thing, then they should come to light in response.
For those who want to deepen their knowledge of AI x-risk, I recommend reading the AGI Safety Fundamentals syllabus. Or better yet, signing up for the next iteration of the course (deadline to apply is 5th Jan).
I took that course and gave EA the benefit of the doubt. I was exposed to arguments about AI safety before I knew much about AI; it was very confusing stuff and a lot of it didn’t add up, but I still gave the EA take the benefit of the doubt, since I didn’t know much about AI and assumed there was something I just didn’t understand. I then spent a lot of time actually learning about AI and trying to understand what experts in the field think about what AI can actually do and what lines of research they are pursuing. Suffice it to say that the material on AGI safety didn’t hold up well after this process.

The AI x-risk concerns seem very quasi-religious. The story is that man will create an omnipresent, omniscient and omnipotent being. Such beings are known as God in religious contexts. More moderate claims hold that a being, or a multiplicity of beings, possessing at least one of these characteristics will be created, which is more akin to the gods of polytheistic religions. This being will then rain down fire and brimstone on humanity for the original sin of being imperfect, which is manifested by the specification of an imperfect goal. It’s very similar to religious creation stories, with the role of creator reversed but the same outcome: Armageddon. Given that the current prophecy seems to indicate that the apocalypse will come by 2030, there seems to be an opportunity for a research study on EA similar to the one done on the Seekers.

Given this looks very much like a religious belief, I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs. There will also be a selection bias towards people who are prone to this kind of ideological belief, similar to how some people are just prone to conspiracy theories like QAnon, albeit AI x-risk is a lot more sophisticated. At least people who believe in mainstream religions are upfront that their beliefs are based on faith. The AI x-risk devotees also base their beliefs on faith, but it’s couched in incomprehensible rationality newspeak, philosophy, absurd extrapolations and theoretical mathematical abstractions that cannot be realized in practical physical systems, to give the illusion that it’s more than that.
Given this looks very much like a religious belief, I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs.
I’d be interested in whether you actually tried that, and whether it’s possible to read your arguments somewhere, or whether you just saw a superficial similarity between religious beliefs and the AI risk community and therefore decided that you don’t want to discuss your counterarguments with anybody.
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don’t think that what’s lacking are arguments or evidence. I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives? Why not just go look for differing perspectives yourself? This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs (I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU).

I witnessed this lack of curiosity in my own cohort that completed AGISF. We had more questions than answers at the end of the course and never really settled anything during our meetings other than minor definitions here and there. Despite that, some of the folks in my cohort went on to work, or try to work, on AI safety and solicit funding without either learning more about AI itself (some of them didn’t have much of a technical background) or trying to clarify their confusion and understanding of the arguments. I also know another fellow from the same run of AGISF who got funding as an AI safety researcher while knowing very little about how AI actually works. They are all very nice, amicable people, but despite all the conversations I’ve had with them they don’t seem open to the idea of changing their beliefs, even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs, other than religious or other superstitious contexts? The other case I can think of is when having a certain belief is tied to having an income, reputation or something else that is valuable to a person. This is why a conflict of interest at the source of funding pushing a certain belief is so pernicious: it really can affect beliefs downstream.
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don’t think that what’s lacking are arguments or evidence.
I’d still be grateful if you could post a link to the best argument (according to your own impression) by some well-respected scholar against AGI risk. If there are “loads of arguments”, this shouldn’t be hard. Somebody asked for something like that here, and there aren’t many convincing answers, and none that would comprehensively and authoritatively put the cause area to rest.
I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for people to bring them arguments to convince them of something really interested in getting different perspectives?
I think so—see footnote 2 of the LessWrong post linked above.
Why not just go look for differing perspectives yourself?
Asking people for arguments is often one of the best ways to look for differing perspectives, in particular if these people have strongly implied that plenty of such arguments exist.
This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs
That this “known human characteristic” strongly applies to people working on AI safety is, up to now, nothing more than a claim.
(I was fascinated by the tales of COVID patients denying that COVID exists even when dying from it in an ICU).
I share that fascination. In my impression, such COVID patients have often previously dismissed COVID as a kind of quasi-religious death cult, implied that worrying about catastrophic risks such as pandemics is nonsense, and claimed that no arguments would convince the devout adherents of the ‘pandemic ideology’ of the incredulity of their beliefs.
Therefore, it only seems helpful to debate in this style when you have already formed a strong opinion as to which side is right, since you can always just claim that the other side’s reasoning is motivated by religion/ideology/etc. Otherwise, the arguments seem like Bulverism.
I witnessed this lack of curiosity in my own cohort that completed AGISF. … They are all very nice, amicable people, but despite all the conversations I’ve had with them they don’t seem open to the idea of changing their beliefs, even when there are a lot of holes in the positions they have and you directly point out those holes to them. In what other contexts are people not open to the idea of changing their beliefs, other than religious or other superstitious contexts? The other case I can think of is when having a certain belief is tied to having an income, reputation or something else that is valuable to a person.
I don’t work in AI Safety, I am not active in that area, and I am happy when I get arguments that tell me I don’t have to worry about things. So I can guarantee that I’d be quite open to such arguments. And given that you imply that the only reason these nice people still want to work in AI Safety is that they are quasi-religious or otherwise biased, I am looking forward to your object-level arguments against the field of AI Safety.
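Here are a couple of links:

What does it mean to align AI with human values?

The implausibility of intelligence explosion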
Sorry, but I’m not going to do your homework for you. If you want to find arguments for or against AI safety, go look for them yourself. If you want to find out what leading AI researchers actually think, you can find that as well. I have no special insight over the many people who have expertise in the field of AI, so I am not the best source and my conclusions could be wrong. I’m still learning more all the time as I increase my expertise in AI. If you have done your homework and have come to the conclusion that AI safety as a field is warranted, then well and good. If you are looking for someone who will argue with you in order to convince you one way or the other, then I hope someone is willing to do that for you. Either way, good luck!
If you don’t want to justify your claims, that’s perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don’t act as if it’s my “homework” to back up your claims with sources and examples. I also find it inappropriate that you throw around accusations like “quasi-religious”, “I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs”, and “just prone to conspiracy theories like QAnon”, while at the same time being unwilling or unable to name any examples of “what experts in the field think about what AI can actually do”.
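Are you sure it was that course?!

Doesn’t sound very like it to me.

Yup very sure. AGI Safety Fundamentals by Cambridge.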
This being will then rain down fire and brimstone on humanity for the original sin of being imperfect, which is manifested by the specification of an imperfect goal.
Ok, well I took that course, and it most definitely did not have that kind of content in it (can you link to a relevant quote?). Better to think of the AI as an unconscious (arbitrary) optimiser, or even an indifferent natural process. There is nothing religious about AI x-risk.
To be clear, I was drawing an analogy to what the claims look like, not saying that the material is written explicitly that way. I see implicit claims of the omnipotence and omniscience of a superintelligent AI from the very first [link](https://intelligence.org/2015/07/24/four-background-claims/) in the curriculum. Claims 2-4 of that link are just beliefs, not testable hypotheses that can be proven or disproven through scientific inquiry.
The whole field of existential risk is made up of hypotheses that aren’t “testable”, in that there would be no one there to read the data in the event of an existential catastrophe. This doesn’t mean that there is nothing useful that we can say (or do) about existential risk. Regarding AI, we can use lines of scientific evidence and inference based on them (e.g. the evolution of intelligence in humans). The post you link to provides some justifications for the claims it makes.
The justifications made in that post are weak in proportion to the claims made, IMO, but I’m just a simple human with very limited knowledge and reasoning capability, so I am most likely wrong in more ways than I could ever fully comprehend. You seem like a more capable human who is able to think about these types of claims a lot more clearly and understand the arguments much better. Given that argumentation is the principal determinant of how people in industry make products, and as a by-product the primary determinant of technological development for something like AI, I have full confidence that the types of inferences you allude to will have very strong predictive value as to how the future unfolds when it comes to AI deployment. I hope you and your fellow believers are able to do a lot of useful things about existential risk from AI based on your accurate and infallible inferences and save humanity. If it doesn’t work out, at least you will have tried your best! Good luck!
No one is saying that their inferences are “infallible” (and pretty much everyone I know in EA/AI Safety is open to changing their minds based on evidence and reason). We can do the best we can; that is all. My concern is that that won’t be enough, and there won’t be any second chances. Personally, I don’t value “dying with dignity” all that much (over just dying). I’ll still be dead. I would love it if someone could make a convincing case that there is nothing to worry about here. I’ve not seen anything close.