This being will then rain down fire and brimstone on humanity for the original sin of being imperfect, which manifests in the specification of an imperfect goal.
Ok, well I took that course, and it most definitely did not have that kind of content in it (can you link to a relevant quote?). Better to think of the AI as an unconscious (arbitrary) optimiser, or even an indifferent natural process. There is nothing religious about AI x-risk.
To be clear, I was making an analogy about what the claims look like, not saying that it is written explicitly that way. I see implicit claims of omnipotence and omniscience for a superintelligent AI from the very first [link](https://intelligence.org/2015/07/24/four-background-claims/) in the curriculum. Claims 2-4 in that link are just beliefs, not testable hypotheses that can be proven or disproven through scientific inquiry.
The whole field of existential risk is made up of hypotheses that aren’t “testable”, in the sense that there would be no one left to read the data in the event of an existential catastrophe. That doesn’t mean there is nothing useful we can say (or do) about existential risk. Regarding AI, we can use lines of scientific evidence and inferences based on them (e.g. the evolution of intelligence in humans). The post you link to provides some justifications for the claims it makes.
The justifications in that post are weak in proportion to the claims made, IMO, but I’m just a simple human with very limited knowledge and reasoning capability, so I am most likely wrong in more ways than I could ever fully comprehend. You seem like a more capable human who is able to think about these kinds of claims much more clearly and understand the arguments much better. Given that argumentation is the principal determinant of how people in industry make products, and as a by-product the primary determinant of technological development for something like AI, I have full confidence that the kind of inferences you allude to will have very strong predictive value for how the future unfolds when it comes to AI deployment. I hope you and your fellow believers are able to do a lot of useful things about existential risk from AI based on your accurate and infallible inferences and save humanity. If it doesn’t work out, at least you will have tried your best! Good luck!
No one is saying that their inferences are “infallible” (and pretty much everyone I know in EA/AI Safety is open to changing their mind based on evidence and reason). We can do the best we can; that is all. My concern is that that won’t be enough, and there won’t be any second chances. Personally, I don’t value “dying with dignity” all that much (over just dying); I’ll still be dead. I would love it if someone could make a convincing case that there is nothing to worry about here. I’ve not seen anything close.
Are you sure it was that course?!
That doesn’t sound much like it to me.
Yup, very sure. AGI Safety Fundamentals by Cambridge.