I would just like to point out three “classical EA” arguments for taking recommender systems very seriously.
1) The dangerousness of AGI has been argued to be orthogonal to the purpose of AGI, as illustrated by the paperclip maximizer thought experiment. If you accept this “orthogonality thesis” and you are concerned about AGI, then you should be concerned about the most sophisticated maximization algorithms. Recommender systems seem to be today’s most sophisticated maximization algorithms (far more money and computing power have been invested in optimizing recommender systems than in GPT-3). Given the enormous economic incentives, we should probably not dismiss the possibility that they will remain the most sophisticated maximization algorithms in the future. (See the illustrative sketch at the end of this point for what the maximization step looks like.)
As a result, arguments of the form “I don’t see how recommender systems can pose an existential threat” seem akin to arguments of the form “I don’t see how AGI can pose an existential threat”.
(of course, if you reject the latter, I can see why you could reject the former 🙂)
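To make the “maximization algorithm” framing concrete, here is a minimal, purely illustrative sketch. Everything in it (the scoring function, the features, the candidate pool) is hypothetical and not a description of any real system such as YouTube’s; the only point is the shape of the objective: pick the item that maximizes a predicted engagement metric.

```python
# Purely illustrative toy: a recommender reduced to its essential maximization step.
# Real systems use learned models over vastly more data, but the objective has the
# same shape: choose the item with the highest predicted engagement.
from typing import Dict, List


def predicted_watch_time(video: Dict, user: Dict) -> float:
    """Hypothetical stand-in for a learned model that predicts engagement."""
    overlap = len(set(video["topics"]) & set(user["interests"]))
    return video["base_appeal"] + 2.0 * overlap


def recommend(candidates: List[Dict], user: Dict) -> Dict:
    """The maximization step: return the candidate with the highest predicted score."""
    return max(candidates, key=lambda v: predicted_watch_time(v, user))


if __name__ == "__main__":
    user = {"interests": {"chess", "politics"}}
    candidates = [
        {"title": "Chess opening traps", "topics": {"chess"}, "base_appeal": 1.0},
        {"title": "Cute cats compilation", "topics": {"cats"}, "base_appeal": 2.5},
    ]
    print(recommend(candidates, user)["title"])  # whichever scores highest for this user
```

The trivial argmax is not where the sophistication lies; it lives in the learned scoring function, which is exactly the part no one can easily audit.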
2) Yudkowsky argues that “By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” Today’s recommender systems are a typical example of something people “conclude too early that they understand”. Such algorithms learn from enormous amounts of data, which will definitely bias them in ways that no one can understand, since no one can view even an iota of what the YouTube algorithm sees. After all, YouTube receives 500 hours of new video per minute (!!), which it processes at least for copyright, hate-speech filtering, and automated captioning (the back-of-the-envelope calculation below gives a sense of the scale).
As a result, arguments of the form “I don’t think the YouTube recommender system is intelligent/sophisticated” may be a sign that you are underestimating today’s algorithms. If so, you might be falling prey to Yudkowsky’s “greatest danger”. At the very least, dismissing the dangers of large-scale algorithms without an adequate understanding of them should probably be regarded as a bad habit.
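For a sense of what 500 hours per minute means, here is a back-of-the-envelope calculation; it assumes only the upload figure quoted above.

```python
# Back-of-the-envelope scale check, assuming the ~500 hours of new video
# uploaded to YouTube per minute quoted above.
hours_per_minute = 500
hours_per_day = hours_per_minute * 60 * 24           # 720,000 hours of new video per day
years_to_watch_one_day = hours_per_day / (24 * 365)  # ~82 years of non-stop viewing
print(f"{hours_per_day:,} hours uploaded per day "
      f"≈ {years_to_watch_one_day:.0f} years of viewing")
```

So each day YouTube ingests roughly a human lifetime’s worth of footage, which is why no individual can inspect more than a negligible fraction of what the algorithm is trained on.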
3) Toby Ord’s latest book stresses the problem of risk factors. Typically, if everybody cared about political scandals while a deadly pandemic (much worse than COVID-19) were going on, then the probability of mitigating pandemic risks would surely be greatly diminished. Arguably, recommender systems are major risk factors, because they point billions of individuals’ attention away from the most pressing problems, including the attention of the brightest among us.
Bill Gates seems to attach a lot of importance to the risk factor of exposure to poor information, or to the lack of quality information, as his foundation has been investing a lot in “solutions journalism”. Perhaps more interestingly still, he has decided to become a YouTuber himself. His channel has 2.3M views (!!) and 450 videos (!!). He publishes several videos per week, especially during this COVID-19 pandemic, probably because he considers the battle for information a major cause area! At the very least, he seems to believe that this huge investment is worth his (very valuable) time.
In particular, arguments of the form “I don’t see how recommender systems can pose an existential threat” are at least as invalid as “I don’t see how AGI can pose an existential threat”
Hold on for a second here. AGI is (by construction) capable of doing everything a recommender system can do, plus presumably other things, so it cannot be the case that arguments for AGI posing an existential threat are necessarily weaker than arguments for recommender systems posing an existential threat.
NB: I’ve edited the sentence to clarify what I meant.
The argument here is more that recommender systems are maximization algorithms, and that, if you buy the “orthogonality thesis”, there is no reason to think that they cannot become AGI. In particular, you should not judge the capability of an algorithm by the simplicity of the task it is given.
Of course, you may reject the orthogonality thesis. If so, please ignore the first argument.