What do you think are the most pressing "mainstream" ethical issues in AI? (fairness, interpretability, privacy, attention design, etc.)
How do you think the public interest tech movement (which encompasses tech-related public policy, doing software development or data science for social good, etc.) could be more effective?
I think there are a lot of issues that bear on both short- and long-term concerns, like:
Interpretability/transparency
Foresight/predicting AI trends
Future of work/economics of AI/labor economics
Aligning recommender systems
Disinformation
Cybersecurity
Automation of governmental systems
Relatedly, this is why people who can't immediately work for an EA-aligned org on "long-term" AI issues can both build useful career capital and do useful work by working in more general AI policy.
For the second question, a pretty boring EA answer: I would like to see more people in near-term AI policy engage in explicit and quantifiable cause prioritization for their work. As EAs generally recognize, impact probably varies quite a lot across these issues, and that should guide which questions people work on.
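For concreteness, here's a minimal sketch of what "explicit and quantifiable" prioritization could look like, using the standard importance/tractability/neglectedness framing; the causes and scores below are placeholder numbers for illustration, not actual estimates:

```python
# Toy sketch of quantifiable cause prioritization for near-term AI policy.
# Causes and scores are hypothetical placeholders, not real estimates.

from dataclasses import dataclass

@dataclass
class Cause:
    name: str
    importance: float    # scale of the problem (1-10)
    tractability: float  # how solvable it is on the margin (1-10)
    neglectedness: float # how under-resourced it currently is (1-10)

    @property
    def score(self) -> float:
        # Multiplicative ITN-style aggregation; other choices are possible.
        return self.importance * self.tractability * self.neglectedness

causes = [
    Cause("Interpretability/transparency", 7, 5, 4),
    Cause("Aligning recommender systems", 6, 4, 6),
    Cause("Disinformation", 8, 3, 2),
]

# Stack-rank causes by score, highest first.
for cause in sorted(causes, key=lambda c: c.score, reverse=True):
    print(f"{cause.name}: {cause.score:.0f}")
```

The multiplicative score is just one aggregation choice; the real value is forcing the numbers to be explicit so they can be debated and revised.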
That's a really good suggestion! Do you know of any attempts at cause prioritization in near-term AI policy? I think most AI policy wonks focus on near-term issues, so publishing a stack ranking could be really influential.
I don't! Would be interesting to see! From an EA perspective, though, flow-through effects on long-term stuff might dominate the considerations.
I'm not sure whom this ranking would be relevant for, though. If you're interested in basic research on AI ethics, you'd want to know whether research on fairness or on privacy is more impactful on the margin. But engineers developing AI applications have to address all of these ethical issues simultaneously; for example, this paper on AI in global development discusses all of them. As an engineer deciding what project to work on, I'd want to know for which causes deploying AI would make the greatest difference.
I'd imagine there's an audience for it!
I think so too. I created a question here to solicit some preliminary thoughts, but it would be cool if someone could do more thorough research.