That’s a really good suggestion! Do you know of any attempts at cause prioritization in near-term AI policy? I think most AI policy wonks focus on near-term issues, so publishing a stack ranking could be really influential.
I don’t! It would be interesting to see. From an EA perspective, though, flow-through effects on long-term outcomes might dominate the considerations.
I’m not sure whom this ranking would be relevant for, though. If you’re interested in basic research on AI ethics, you’d want to know whether doing research on fairness or privacy is more impactful on the margin. But engineers developing AI applications have to address all ethical issues simultaneously; for example, this paper on AI in global development discusses all of them. As an engineer deciding what project to work on, I’d have to know for which causes deploying AI would make the greatest difference.
I’d imagine there’s an audience for it!
I think so too. I created a question here to solicit some preliminary thoughts, but it would be cool if someone could do more thorough research.