The thing is, the author of that post largely agrees with you. Elsewhere he has put the probability of AI extinction at 1, and he is desperately trying to come up with any way to prevent it.
On the other hand, I think the model of AI put forward in that post is absurdly unlikely, and the risk of AI extinction is orders of magnitude lower. AI will not be a single-minded, fanatically utilitarian agent fixated on one goal; it is likely to absorb at least a little of our values.
Oh, no, to be clear I find the post extremely unpersuasive—I am interested in it only insofar as it seems to represent received wisdom within the EA community.