As a more EA example, consider the Malaria Consortium (or other GiveWell top charities). Much of philanthropy could become far more effective if donors were better informed. An aligned recommender could stress this fact and recommend effective charities over appealing but ineffective ones. Thousands, if not hundreds of thousands, of lives could probably be saved by exposing potential donors to better-quality information.
Why would an aligned recommender stress this fact? Is this something we could have much influence over?
The importance of ethics in YouTube's recommendations seems to have grown significantly over the last two years (see this for instance). This suggests there are pressures, both external and internal, that may be effective in making YouTube care about recommending quality information.
That said, YouTube's efforts so far seem to have focused mostly on removing (or down-ranking) undesirable content (though as an outsider it's hard for me to say). Perhaps they can also be convinced to recommend more desirable content.
Possibly. One trend in YouTube's recommendations seems to be towards more mainstream content, and EA, x-risks and farm animal welfare/rights aren't really mainstream topics (animal rights specifically might be considered radical). Any technical contributions to recommender alignment might therefore be used to further exclude these topics, and so be net-negative.
Advocacy, policy and getting the right people on (ethics) boards might be safer. Maybe writing about the issue for Vox’s Future Perfect could be a good place to start?