In particular, arguments of the form “I don’t see how recommender systems can pose an existential threat” are at least as invalid as “I don’t see how AGI can pose an existential threat”
Hold on for a second here. AGI is (by construction) capable of doing everything a recommender system can do, plus presumably other things, so it cannot be the case that arguments for AGI posing an existential threat are necessarily weaker than arguments for recommender systems posing one.
NB: I’ve edited the sentence to clarify what I meant.
The argument here is more that recommender systems are maximization algorithms, and that, if you buy the "orthogonality thesis", there is no reason to think that a recommender system cannot become an AGI. In particular, you should not judge the capability of an algorithm by the simplicity of the task it is given.
Of course, you may reject the orthogonality thesis. If so, please ignore the first argument.