I think this is an interesting topic. However, I downvoted because if you’re going to claim something is the “greatest priority cause,” which is quite a claim, I would at least want to see an analysis of how it fares against other causes on scale, tractability, and neglectedness.
(Basically I agree with MichaelStJules’s comment, except I think the analysis need not be quantitative.)
I thought the post already stressed the scale and neglectedness of short-term AI alignment, but I can dwell on this a bit more. There are now more views on YouTube than searches on Google, and 70% of those views result from recommendations. A few studies (cited here) suggest that repeated exposure to certain kinds of information has a strong effect on beliefs, preferences, and habits. Since this bears on essentially every other EA cause, I’d say the scale of the problem is at least that of any other EA cause.
I believe that alignment is extremely neglected in academia and industry, and short-term alignment is still greatly neglected within EA.
The harder point to estimate is tractability. It is noteworthy that Google, Facebook, and Twitter have recently taken a number of measures towards more ethical algorithms, which suggests it may be possible to push them to go further in that direction. The other hard part is technical. While it might be possible to promote some videos “by hand”, more robust and sustainable solutions for beneficial recommendation seem desirable. I think a near-perfect recommender is technically way out of reach (it’s essentially solving AGI safety!), but there are likely numerous small tweaks that could greatly improve how robustly beneficial current recommender systems are.
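To make the “small tweaks” point a bit more concrete, here is a purely illustrative sketch (in Python, with made-up names, scores, and weights) of one such tweak: re-ranking candidate videos by a mix of the usual engagement score and an independent reliability/quality score, rather than by predicted engagement alone. This is not a claim about how YouTube or any real system works; it only shows that adjustments of this kind are technically simple once a quality signal exists.

```python
# Illustrative only: a hypothetical re-ranking tweak for a recommender.
# 'engagement' stands in for a predicted watch-time/click score;
# 'quality' stands in for an independent reliability/benefit score
# (e.g. from fact-checking signals or expert ratings). Both are assumptions.

from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    engagement: float  # predicted engagement, in [0, 1]
    quality: float     # estimated reliability/benefit, in [0, 1]

def rerank(candidates, quality_weight=0.3):
    """Rank by a convex combination of engagement and quality.

    quality_weight = 0 recovers pure engagement ranking;
    raising it trades some engagement for more reliable content.
    """
    def score(c):
        return (1 - quality_weight) * c.engagement + quality_weight * c.quality
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    pool = [
        Candidate("sensational_clip", engagement=0.9, quality=0.2),
        Candidate("solid_explainer", engagement=0.7, quality=0.9),
        Candidate("average_vlog", engagement=0.6, quality=0.5),
    ]
    for c in rerank(pool, quality_weight=0.3):
        print(c.video_id)  # the explainer now outranks the sensational clip
```

Of course, the hard part in practice is producing a trustworthy quality signal at scale, not the re-ranking arithmetic; the sketch is only meant to illustrate where a small tweak could slot in.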
Of course, all of this is much more complex to discuss properly. I’ve only presented a glimpse of what we discuss in our book and on our podcast. And I’m very much aware of the extent of my ignorance, which is unfortunately huge...