From an AI safety perspective, the algorithms that generate social media users' feeds do have some properties that make them potentially more concerning than most AI applications:
The top capabilities are likely to be concentrated rather than distributed. For example, very few actors in the near future are likely to invest resources in such algorithms at a scale similar to Facebook's.
The feed-creation solution (or policy, in reinforcement learning terminology) being searched for has a very rich real-world action space (e.g. showing some post X to some user Y, where Y is any person from a set of roughly 3 billion FB users).
The social media company is incentivized to find a policy that maximizes users' time spent over a long time horizon (rather than using a very small discount factor); a toy sketch of this framing appears below.
Early failures or deception attempts may be very hard to detect, especially if the social media company itself is not on the lookout for such failures.
These properties seem to make it less likely that the relevant people would see sufficiently alarming small-scale failures before the point at which some AI systems pose existential risks.
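To make the reinforcement-learning framing above concrete, here is a minimal, purely illustrative Python sketch. All names and numbers are hypothetical and not drawn from any real recommender system; it only shows the shape of the problem: the action is which post to show to which user (an enormous action space), the reward is time spent, and a discount factor close to 1 makes the objective long-horizon rather than myopic.

```python
import random
from dataclasses import dataclass

# Hypothetical, simplified framing of feed construction as an RL problem.
# Every name here is illustrative; none corresponds to a real system.

@dataclass
class FeedState:
    user_id: int           # one of ~3 billion possible users
    session_minutes: float # time spent so far in this session

def candidate_actions(posts, k=500):
    """An action = choosing which post to show next to this user.
    The full action space is |posts| x |users|, i.e. enormous;
    in practice only a small candidate set would be scored."""
    return random.sample(posts, min(k, len(posts)))

def discounted_return(rewards, gamma=0.999):
    """Time-spent rewards summed over a long horizon.
    A discount factor close to 1 means the policy being optimized
    cares about engagement far into the future, not just the next click."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

# Example: rewards are minutes of attention per shown post.
session_rewards = [1.2, 0.8, 3.5, 0.0, 2.1]
print(discounted_return(session_rewards))             # long-horizon objective
print(discounted_return(session_rewards, gamma=0.5))  # myopic objective, for contrast
```

With gamma near 1, engagement far in the future contributes almost as much to the objective as immediate engagement, which is the long-horizon incentive referred to above.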