The short-term, dopamine-driven feedback loops that we have created are destroying how society works.
That’s a very strong statement, and I don’t think it’s warranted.
My understanding is that research suggests the link between digital technology use and reduced well-being is exaggerated. See, e.g., this paper:
The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change.
Similarly, it’s been suggested that social media use drives polarization, but the evidence for that is unclear (with some studies finding evidence against).
“Recommender systems” is an extremely broad category, and I think this discussion would benefit from being more concrete, and maybe also from narrowing it down. It’s not obvious to me how strong the link between improving on standard recommender systems and AGI alignment is, for instance. It may be better to choose one of these tasks as the primary focus initially.
With regards to standard recommender systems, many of those aren’t directly focused on increasing well-being, but rather on, e.g. increasing epistemic standards, preventing fraud, etc. Those things may of course indirectly increase well-being, but I think it may be better to think in terms of proximate aims.
There’s been quite a lot written on better recommender or reputation systems, and people have had high hopes (see, e.g., the book The Reputation Society). While some recommendation systems are very successful (e.g. Google), it also seems to me that many of these hopes haven’t materialized.
A new article (referring to this new paper) argues that the New York Times’ claims about algorithmic radicalization are flawed (the OP links to a NYT article on such issues):
By looking at recommendation flows between various political orientations and subcultures, we show how YouTube’s late 2019 algorithm is not a radicalization pipeline, but in fact
Removes almost all recommendations for conspiracy theorists, provocateurs and white Identitarians
Benefits mainstream partisan channels such as Fox News and Last Week Tonight
Also on Marginal Revolution.