Possibly. One trend in YouTube’s recommendations seems to be a shift towards more mainstream content, and EA, x-risks and farm animal welfare/rights aren’t really mainstream topics (animal rights specifically might be considered radical), so any technical contributions to recommender alignment might be used to further exclude these topics and end up net-negative.
Advocacy, policy and getting the right people on (ethics) boards might be safer. Maybe writing about the issue for Vox’s Future Perfect could be a good place to start?